In my exploration, mostly through random blog posts and this great Pragmatic Studio course, I’ve found one thing difficult.
These fancy pants ML-like programming languages have way too many un-googleable function names in the standard library!
I’m going to list a few of the unrecognizable ones here and give you my own name for them. If you know of more, let me know and I’ll add them. Bookmark this page if you ever want to find out what one of these wild and crazy characters in your Elm source is really for.
andThen (docs): the use-as-infix syntax.

What else do you find ungoogleable in Elm? I hope this helps many others on the same journey to understanding Elm.
We need to consistently prioritize updating our frameworks/libraries/dependencies. Having unit tests makes this much easier, but even if you don’t have them, it’s still necessary.
Brushing your teeth is something you do so regularly, and with such an ingrained habit, that you can’t really imagine life without it. Flossing is a nice-to-have that many of us skip with impunity. Updating doesn’t necessarily have to happen once a day, but scheduling it on a regular, predefined basis forces you to consciously choose to skip it instead of absentmindedly forgetting.
On teams, you should make sure that there’s a recurring task/calendar event every month to ensure that your codebases are up to date. This is most important with frameworks/impactful libraries like Ember/Angular/React, but it is just as helpful for anything inside of the world of Node/Ruby/Python/Java/anything-with-dependencies.
Once you make this as much of a habit as brushing your teeth, you won’t even need to think about it because it becomes every team member’s shared expectation. Use reverse-broken-window syndrome to your advantage. If everyone is doing it, each person will keep up with the crowd.
I still use the plain old toothbrushes that I get for free every six months from my dentist, but I know that many are bigger fans of the electric/sonic variety. Thankfully, they exist for programmers trying to keep dependencies up to date too!
This is a great tool by the makers of Hoodie that will automagically open PRs against your npm project when a dependency releases a new version.
This tool will send you emails whenever a dependency creates a new GitHub “Release”.

For open source Node projects, VersionEye will show you which entries in your package.json are out of date.
There’s nothing stopping you from starting this practice right now. You are resourceful. You could make a slackbot for this, setup a shared calendar, or any other myriad solutions. All I can tell you is that once you do, your team will be better off.
Here’s to gingivitis-free software!
This is their marketing copy. They say that Tonic removes friction. I was very surprised by how true that was.
Any require statement is automatically parsed, and the dependency is added to the implicit package.json file that backs every “Notebook”. You don’t have to fiddle with the JSON yourself; just add var request = require('request'); and you automagically have the current version of request. There’s also some special additional syntax to use in your require statement if you do happen to need a specific version.
The killer feature of this tool, though, is not the npm integration, but rather its ability to create on-the-fly APIs. I’ve been looking for something like this from a lot of other tools. I have wished there was a way to create a one-liner API and deploy it to Heroku as an endpoint using only the web. Until now, I haven’t had anything like that. Now, not only do I get an API in 2 clicks and 3 lines of code, but it also supports CORS!
Creating a new HTTP endpoint is as easy as clicking on compose in Gmail. Clicking on new Notebook, and then writing this code, got me a passthrough to an open, but non-CORS-enabled, endpoint in under 5 minutes.
No more needing to spin up node on my VPS just to CORS enable APIs!
This is basically a two-liner (not counting boilerplate) to get my own Duolingo profile. In case you don’t know, I’m extremely addicted to Duolingo and many of my side projects are tools to remind me to practice learning languages every day. Tonic is going to make that tool writing so much easier! I don’t need node proxies anymore; I can just build my apps in JS Bin, call this new Tonic-built endpoint, and be done in minutes!
I’m really excited about this; I’m sure you can tell. See you in the comments!
The idea behind the BPUR is that some programming language concepts take many readings of many blog posts and other resources in order to fully comprehend. On the other hand, there are many topics that are much more readily digested. A high BPUR means that I need to consult with many resources in order to grok something, whereas a low BPUR is something that we can pick up without significant intentional thought. It helps me frame how complicated the topic is and how fast I can expect myself to understand a new topic.
For example, if someone were to start learning JavaScript from scratch, at some point they’d definitely need to understand the idea of assigning to a variable. Most developers are familiar with the typical C family of programming languages, where the ideas are fairly straightforward: assignment statements have equal signs in them, and the semantics of a variable assignment are already understood. I would set the BPUR for how to assign a variable in JavaScript very, very low.
However, being able to understand something like prototypal inheritance and that there are actually two distinct, albeit similar, uses of the word “prototype” takes much more time. Prototypal inheritance has taken me dozens of blog posts to really understand. Many years ago, I had to Google, find some blog posts. Read them. Feel like I really didn’t understand after reading them. Then go back, read more. Then wait a couple of days. Let the whole concept sink in a little bit. Still not understand. Rubberduck a little bit with myself by thinking about it out loud. Then go to Google again, read some more blog posts. Then finally, that knowledge coalesced and then I understood prototypal inheritance. This is a topic that I would say has a very high BPUR.
That high BPUR means that I can’t expect to immediately go in and understand this. For me, thinking in this metric allows me to reset my expectations around speed of learning. I don’t want to be demoralized or frustrated by thinking that I should understand this in 30 seconds. Some topics are more complicated than that and it would be okay if it takes me a week, or even a couple weeks to fully understand and to fully grok. That’s okay. It has a high BPUR.
As I’m spending more and more time learning programming languages for fun, this is a very helpful way for me to think. It keeps my morale up and it allows me to keep persisting to learn new things. The only thing that’s missing is an understanding of what the BPUR is for all concepts in all programming languages.
This is something that I am currently thinking about how to fix. I’d love a chart like this for every programming language and every topic within it:
I’d love to hear all of your ideas and thoughts about this metric. I don’t think that 10 is necessarily the correct high BPUR, but that’s going to be something that I continue thinking about. I am going to be working in future posts on how to document and distribute information like this. Maybe we can crowdsource a shared library of BPURs for every concept in every programming language.
Comments are below and there’s always twitter for 140-character discussion!
big data, web components, transpiling, build systems, and so much more.
While even I have the tendency to roll my eyes at some of these things, I want to also mention that buzzwords typically do have some actual reason to exist. All of the items that I listed earlier are actually very valuable and interesting technologies, even if talking about them might make some of us want to throw up in our mouths, just a little bit. I try to make sure that I discount some of the zeitgeist, but also consider how I can apply these buzzwords practically.
Now, let’s shift gears into the two buzzwords that are still on the tips of everyone’s tongue.
Virtual DOM and functional programming!

The Virtual DOM, as a concept, is extremely popular across many JavaScript development tools and methodologies. It is a very valuable tool that is inspiring performance optimizations in many frameworks and libraries. At its core, the Virtual DOM is a performant, in-memory tree diffing algorithm.
Take, for example, these two trees.

From the left tree to the right tree, there is one single difference. As a human, this is a bit difficult to spot, but for a machine, especially one using the Virtual DOM algorithm, it’s really fast. The Virtual DOM can figure out, very quickly, that one leaf node has switched to a different parent.
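To make that concrete, here is a toy sketch of tree diffing — this is my own illustration, not React’s actual algorithm — showing how a machine can find a moved leaf quickly by comparing parent pointers between two versions of a tree:

```javascript
// Toy illustration: record each node's parent, then report any node
// whose parent changed between the two trees.
function collectParents(node, parent, map) {
  map[node.name] = parent ? parent.name : null;
  (node.children || []).forEach(function (child) {
    collectParents(child, node, map);
  });
  return map;
}

function diffTrees(before, after) {
  var oldParents = collectParents(before, null, {});
  var newParents = collectParents(after, null, {});
  return Object.keys(newParents).filter(function (name) {
    return oldParents[name] !== newParents[name];
  });
}

// Two trees that differ by one relocated leaf.
var left = { name: 'root', children: [
  { name: 'a', children: [{ name: 'leaf' }] },
  { name: 'b' }
] };
var right = { name: 'root', children: [
  { name: 'a' },
  { name: 'b', children: [{ name: 'leaf' }] }
] };

diffTrees(left, right); // ['leaf'] -- the one node that moved
```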
I’m not going to try to explain, nor pretend to understand, the rigors of category theory or mathematics in this post. What I’d like to do, instead, is take you back to when functions were first introduced to you. Back in Algebra 1, you didn’t think of functions as composable, discrete units of referential transparency and purity. You just thought of f(x).
Do you remember seeing something like this?
How straightforward is that! Given one input value, a function produces one and only one output value. This is the crux of the functional programming buzzword and philosophy that I’d like you to keep in your head throughout this post.
Compared to the other popular frameworks, it’s not the Virtual DOM that sets React apart. What really makes React special is the fact that you can use it to think about your application as simply as a function.
Using React, your DOM is a function of all of your state. If there were a single high-school-reminiscent formula for what React does, it would be this…
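In code, my paraphrase of that formula — this sketch is mine, not React’s API — is simply a pure function from state to view:

```javascript
// view = f(state): the entire view is recomputed from state and
// nothing else. One input, one and only one output.
function render(state) {
  return '<ul>' + state.items.map(function (item) {
    return '<li>' + item + '</li>';
  }).join('') + '</ul>';
}

render({ items: ['a', 'b'] }); // '<ul><li>a</li><li>b</li></ul>'
```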
This is very special in the single-page web application landscape. Typically, using Backbone listeners, or Ember observers, or Angular watchers, we end up with something a little more complicated: frameworks that try to keep data and view in sync.

The key word is “try”. React doesn’t try to keep data and view in sync. The view is always a function of the data, so there is no extra work needed to keep everything in sync.
In React, you never need to directly create any DOM. React creates DOM for you and updates the DOM for you. All you have to do, as a developer, is specify what the DOM should look like for any given state of your application. This allows you the freedom to only need to think of any one view as a whole instead of having to think through everything that needs to change when your data changes.
This concept is called “Unidirectional Data Flow” in React. Your data changes in only one place, and whenever that data is changed, the view is updated accordingly.
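As a sketch of that idea — again my own illustration, with no React involved — all writes funnel through one setState function, and every write re-renders the view wholesale:

```javascript
// Minimal unidirectional data flow: state changes in one place, and the
// view is regenerated from scratch after every change.
var state = { count: 0 };
var view = '';

function render(s) {
  return 'Count: ' + s.count;
}

function setState(patch) {
  Object.keys(patch).forEach(function (key) {
    state[key] = patch[key];
  });
  view = render(state); // no syncing logic -- just recompute the view
}

setState({ count: 1 }); // view === 'Count: 1'
```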
I gave a talk form of this post, in much more detail at React Rally and NationJS last year. Click play below if you’d like to learn more.
In that time, I’ve gotten married. I did happen to tweet one picture of it. It was one of the few pictures (grainy, and from a cell phone) that we had for a while.
I loved @jennshiffer’s response to it
@johnkpaul @kosamari @rhodesjason @janecakemaster @jsconf @SkylarPanuska Poland Spring: what it means to be from Jkp and skylar's wedding™
— jennmoneydollars (@jennschiffer) December 6, 2015
And am grateful for all of the many wholehearted congratulations and well-wishes we received that day, and many times since.
The reason we ended up with the Poland Spring was that most of the night, we were like this picture below. 🎉 🍸🍷 🍸🍷 🎉 Also, wow, professional photographers are awesome.
I was and am still pretty damn happy about the wedding. We have another month to get the rest of the pictures, and in the meantime there’s a whole slideshow of amazing pictures in case you’d like to take a look.
With all of the hubbub around the wedding, and the planning that took many, many months, I’ve been slowly slipping away from Twitter, my social media drug of choice. I still read it every day, but I was engaging with it less and less. Except for when I was at conferences, a few of which coincided with this exciting time in my personal life, I didn’t pay that much attention.
My honeymoon was at a cabin-hotel in the Poconos where you’re not supposed to use the internet in public. The cell service isn’t even that great in the middle of nowhere. My time away from social media was awesome. It was filled with long conversations with human beings, a favorite pastime of mine for a long time anyway. Typically, this was with my new wife, but there were also more conversations with other friends and most notably, family.
I didn’t really expect to find how much a wedding can bring family together. I am still consistently surprised (kind of an oxymoron, right?) at how much my family has been communicating more frequently and about more varied subjects since then. What before used to just be “So you’re still alive and have a job right?” is now more like “How was your day?”
Apparently all of this time away from social media opened up a whole new world of closeness and warmth between me and my family, but eventually we all have to get back to the real world. The real world with jobs, obligations, regular outings/dinners/karaokes with my own social circle, planning my own meetup and all sorts of other non-wedding related things. Since New Years Day, the last day that I saw anyone in my family but my wife, I have been trying to come up with ways to continue this closeness without losing sleep or sanity.
This transition to family-and-friends coalescence has been really fun for me, and I’m finding that keeping it up requires something quite unusual for me.
I have had a Facebook account for over a decade, but I have logged into it a total of maybe 10 times since 2006. That’s approximately once per year. It’s pretty much only when it’s my birthday, as it is today, that I log in to see all of the warm wishes and feel guilty that I’m not individually responding to every happy birthday message.
My family, on the other hand, has been using Facebook EXTREMELY regularly for that entire decade. Every week, they post over 10 times what I have posted in the entire time that I’ve been on Facebook. My job, according to Facebook, is my first job out of college and I’ve never updated it.
Apparently, to keep in touch normally, I have to learn how the youths communicate nowadays and use Facebook. I know that the real youths are on snapchat and vine now, but please give me some credit. I’m not old and crotchety yet, but since Facebook was starting while I was in college, I can’t imagine a world where only the old people are using Facebook.
I never thought I’d get here, let alone, at my age. I am going to start trying to use Facebook like a normal person. Well, somewhat like a normal person. I don’t want to waste hours mindlessly scrolling (not that that’s what you do, that’s just what I would do). I do want to consistently communicate and update my family and friends. Consistently. Not Frequently. I don’t want to be letting you know about every cupcake that I ogle and don’t buy or every disruptive startup idea that pops into my head. I do want to make sure that the people that I care about and the people that care about me know what I’m up to, and vice versa.
Maybe this means there’ll be some pruning or organization. Maybe this means that I’m going to declare friend bankruptcy. I might even respond to a poke or two from a decade ago. I don’t really know. I’ll figure it out sometime in 2016, I’m sure.
If you have any advice for someone joining Facebook this late in the game, please let me know. Honestly, I’d love to hear your thoughts. I feel like I’m the only person that’s ever gone this direction.
Promethify is a browserify transform that allows for the async module loading that RequireJS solves out of the box. It’s intended to let you load only what you need.
It allows you to specify which modules need to be loaded dynamically, just like RequireJS. Pass an array of dependencies instead of a single string as the first parameter, and boom, everything else just works.
You can use it to write code like this in browserify:
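Since the original listings were lost, here is a hedged sketch of the call shape the post describes — the AMD-style array-plus-callback form. The asyncRequire shim below just fakes the mechanics so the shape is runnable on its own; in an actual browserify bundle you would call require itself this way, and Promethify’s real API may differ:

```javascript
// Stand-in for the async form: pass an array of dependency names plus
// a callback, instead of a single string.
function asyncRequire(deps, callback) {
  // Promethify would lazily fetch split bundles here; we fake modules
  // so the sketch runs anywhere.
  setTimeout(function () {
    callback.apply(null, deps.map(function (name) {
      return { loadedFrom: name };
    }));
  }, 0);
}

asyncRequire(['./big-feature'], function (bigFeature) {
  console.log('lazily loaded:', bigFeature.loadedFrom);
});
```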
There are a few existing alternative approaches out there to achieve lazy loading with browserify, but they all share one large con for me: they all require significant build/configuration changes for each module that needs to be asynchronously loaded.
The main goal for this project, in relation to the other solutions, was to avoid any per-module build or configuration changes.
I am hopeful that this will open up a whole new world of possibilities for browserify. As a huge RequireJS fan, this is something that I’ve sorely missed as I’ve used browserify on larger applications.
Please let me know your thoughts in the comments or on twitter. I’d love to hear why this is a great/horrible idea, or how I can make it better.
While it’s definitely worthwhile to know what void 0 does, in JavaScript, just use undefined instead of void 0. There are a few reasons why some might consider using void 0 in a codebase, and I’d like to address a few of them here.
A very old version of JavaScript does allow assigning to the identifier undefined, and that’s potentially an argument for why void 0 should be used instead of undefined. Formally, the undefined that we use in JavaScript is a property on the global object; in the browser, window.undefined is similar to window.setTimeout. It is assignable in ES3, so you can change the value of undefined in old browsers. Understandably, everyone is worried about what happens if someone sneaks undefined = true somewhere into a codebase, but there are other ways to deal with this problem.
ES5 solves the assignable undefined problem by making sure that trying to assign to undefined is a no-op.

Depending on their programming language background, some devs feel that using void 0 expresses their intent more clearly. They want to “void out” a certain property, similar to how languages like C have a void return type for functions that don’t return anything. Personally, I have a hard time understanding how the word “undefined” could be expressed any better, considering there’s only one regular dictionary definition of the word, but everyone has different tastes when it comes to these things.
With respect to this argument, I always err on the side of idiom and convention. JavaScript uses the identifier undefined for this meaning, not void 0, and I think the latter only sounds better if you’re coming from C/C++.
I know that I’m veering into difficult territory here, but I think that a part of this is showing off. We all have the urge to use everything new that we learn in a programming language, and we all have egos. This particular case is even better because many JavaScript developers don’t know what void 0 does, or even that there is a void operator in JavaScript.
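For anyone in that group: the void operator evaluates its operand, discards the result, and always yields undefined, no matter what the operand is.

```javascript
// void always produces undefined, regardless of its operand.
void 0;           // undefined
void 'anything';  // undefined
void (1 + 1);     // undefined

var result = void 0;
result === undefined; // true
```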
There’s also a tendency to think, “Well, it’s part of JavaScript, and if developers don’t know JavaScript, then why are they working here?” I understand that argument too, but I don’t think that it is always correct. I very unscientifically polled my twitter followers, and the people who saw their retweets, to figure out how many people actually knew what void 0 did. Only 65% of responses were correct, and taking into account cheating and the already JS-loving skew of my followers, the actual percentage of JavaScript developers who know what void 0 does is much lower.
I know that thinking about what most JavaScript developers already understand can be a slippery slope, but here’s a litmus test for when this should matter: if there is a straightforward and semantically equivalent alternative to sprinkling your codebase with little-known constructs/features, don’t use them. undefined is that alternative whenever you think you need void 0. Please don’t think that I’m telling you not to use anything that’s not known by most JS devs. Getters and setters, for example, are fine because they are incredibly powerful and have no more widely known alternative.
In case you are very worried about older browsers and malicious code, the way to deal with this is to silo away, somewhere in your code, an actual unchangeable version of the value undefined and use that where you need it. Define it once and use it everywhere, instead of void 0 everywhere.
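The original listing is lost; here is a hedged sketch of one common way to do that — capture the value of an unassigned function parameter inside an IIFE, where nothing external can have reassigned it:

```javascript
// Inside this IIFE, `undefined` is a local parameter that was never
// passed an argument, so it is guaranteed to hold the real undefined,
// even if a sloppy old environment let someone clobber the global one.
var SAFE = (function (undefined) {
  return {
    undefined: undefined,
    isUndefined: function (value) {
      return value === undefined;
    }
  };
})();

SAFE.isUndefined(void 0); // true
SAFE.isUndefined(null);   // false
```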
Please don’t use void 0 in code that other people need to read. If you’re worried about malicious code changing the value, there are alternatives. When trying to decide what to include in your codebase, consider what percentage of the people who work with you will understand what you’re doing, and apply the litmus test of “Is there an equivalent alternative?”
See you in the comments or on twitter!
In case you saw my twitter poll, here’s the result breakdown:
Coding at Scale, Angular State of Mind, CSS Animations and CucumberJS from John K. Paul on Vimeo.
Another month, even more awesome talks. This time, one main talk and three lightning talks. I'm very encouraged to see how many people out there want to come and share what they know. As always, get in touch with me if you want to present either lightning talks or main talks.
Coding at Scale: Tactics for Large-Scale Web Development by Mike Petrovich
Web applications and their development teams are becoming larger and more complex, which introduces a new set of challenges relating to developer communication and collaboration. How do large teams—sometimes in different parts of the world—coordinate effectively to build complex web applications?
In this talk, we'll examine many of the technical and logistical challenges faced by large and small-but-growing development teams alike, and we'll identify high-level patterns and implementation tactics successfully employed by development organizations.
Angular State of Mind: Intro to Angular.js by Rushaine McBean
Using a JavaScript framework or library today is a no-brainer; the hard part is figuring out which one to learn. In this talk, I’ll guide you through the key Angular concepts so your eventual mastery of AngularJS will lead to building scalable JS applications.
CSS Animations by Chris Sanders
The goal of my presentation is to discuss animations in CSS and why they are the preferred way to handle animations for your application. First, I will give an overview of the syntax for creating animations. Then I will present some common things people like to create with JavaScript, and instead create them with CSS3 and CSS4.
CucumberJS by David Souther
David Souther shows how to write and run behavioral tests using Grunt, CucumberJS, and WebdriverJS. In a TDD fashion, we will go from an empty directory to a complete green functional test. Bring your Github, and look at each commit as we show how easy testing the full stack can be.
Precommit hooks are the most awesome and straightforward line of defense to add into your build system. They’re the earliest intervention we can use to make sure that bad code doesn’t make its way to production.
Precommit hooks are basically bash scripts that are run by the git executable before every single commit. The script can do whatever it needs to do to verify that the commit is good and should proceed, and it can exit with a non-zero exit code to signal to git that it shouldn’t allow the commit.
Here’s an example of a very basic pre-commit hook.
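The original listing didn’t survive, so here is a hedged sketch of what a basic hook looks like. The good-to-commit.js name comes from the post itself; what that script actually checks is entirely up to you, and the file-existence guard is mine so the sketch runs standalone:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- run a check script; abort the commit if it fails.
if [ -f good-to-commit.js ]; then
  if ! node good-to-commit.js; then
    echo "good-to-commit.js failed -- aborting commit." >&2
    exit 1
  fi
fi
echo "pre-commit checks passed"
```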
If you put this into your .git/hooks/pre-commit file in your local cloned copy of a git repo, it will run every time that you commit. You can do whatever you’d like in good-to-commit.js
. You can run a syntax validator, beautification validator, linter, all of your unit tests; it’s completely up to you and your team. The hard part is actually making sure that this precommit hook is installed on everyone’s machines.
Git repos can’t actually include hooks when they are cloned. This would be a security issue, because precommit hooks are run as the current user; we wouldn’t want someone to sneak rm -rf ~ into a precommit hook. I get around this in a very sneaky way, and when I say sneaky, I mean awesome.
Chances are, you’re working on a project that uses some build tool to handle common tasks. I am using grunt at the moment, but you can come up with equivalent methods for whatever tool you use. The key idea is to use your build tool’s before-every-single-task hook to your advantage.
Using grunt, whenever any task is run, the entire Gruntfile.js is executed. In order to install precommit hooks for everyone who ever uses my project, I add this to the bottom of that file.
If you’re using rake or fabric, you can do very similar things. Every build tool will have a way to do it.
If you’re using npm and node, you can do all of this sneaky hook registration in a postinstall hook, or use M. Chase Whittemore’s node-hooks project.
I want you to go off and add precommit hooks to every project that you have. Let me know your experiences. I’ll talk to you in the comments and on twitter!
Let’s say that you have an object in one part of your code that you’re working with.
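The original object listing is gone; here is a hedged stand-in with the one property the rest of the post refers to (lastFetchedId):

```javascript
// Hypothetical reconstruction of the object being watched.
var pagingData = {
  perPage: 20,
  lastFetchedId: 0
};
```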
And some other remote part of your code, that you had no idea existed, is modifying your object behind your back.
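And a hedged stand-in for the mystery mutation — the pagingData declaration and the handler name are illustrative, not from the original post:

```javascript
// Somewhere far away, in a file you've never opened...
var pagingData = { lastFetchedId: 0 }; // stand-in for the object above

function someObscureHandler() {
  pagingData.lastFetchedId = 9999; // who calls this, and from where?
}

someObscureHandler();
```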
It’s straightforward to set up a watch in the Chrome debugger, so you could easily see that something was being changed, but the hard part is figuring out what piece of code is doing the changing. If you happen to sometimes live in a world with dozens and dozens of script tags on any given page, you’re pretty much SOL when it comes to grepping for the culprit.
This is where one very handy feature of ES5 comes in. ES5 defines many APIs in JavaScript that we use regularly, like Array.prototype.indexOf and JSON.parse, but it also has fancier pieces like Object.defineProperty. Object.defineProperty allows you to set up accessors called get and set for any property on an object, in such a way that any code that uses that property doesn’t have to know that a function is being run.
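The original listing was lost, so here is a hedged reconstruction that follows the prose: wrap lastFetchedId in accessors so every read and write runs a function you control.

```javascript
var pagingData = { lastFetchedId: 0 }; // stand-in for the object above
var actualValue = pagingData.lastFetchedId;

Object.defineProperty(pagingData, 'lastFetchedId', {
  get: function () {
    // put a breakpoint here to catch reads
    return actualValue;
  },
  set: function (newValue) {
    // put a breakpoint here to catch the mystery writer
    actualValue = newValue;
  }
});

pagingData.lastFetchedId = 42;
pagingData.lastFetchedId; // 42 -- callers can't tell accessors exist
```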
This code changes very little of the external behavior of the pagingData object. All it does is run these particular functions whenever pagingData.lastFetchedId is set to a new value or the value is read. The exciting part is that now you can add a breakpoint in your debugger inside of the set function, which will then break whenever something sets that property. Once you have this breakpoint, you can look through the callstack and you’ll be pointed directly at /src/path/youve/never/noticed.js.
If you want an even easier way to get that breakpoint in there, you can just add a debugger statement directly into your new debugging code.
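A hedged sketch of the same accessor with the breakpoint baked in as a debugger statement (pagingData is again a stand-in):

```javascript
var pagingData = { lastFetchedId: 0 }; // stand-in for the object above
var actualValue = pagingData.lastFetchedId;

Object.defineProperty(pagingData, 'lastFetchedId', {
  get: function () {
    return actualValue;
  },
  set: function (newValue) {
    debugger; // with devtools open, execution pauses on every write
    actualValue = newValue;
  }
});
```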
As always, make sure that you remove this kind of debugging code before you commit, and don’t let it into production. I hope you have questions. I’ll see you in the comments!
This is because JavaScript debuggers optimize the hell out of your code and will remove variables from the Lexical Environment of a function if they are unused. See my previous post about hoisting for more information about the Lexical Environment. If you use these variables anywhere in your source, suddenly they will be accessible again inside of the scope of the new function.
You don’t even have to actually cause any side effects in order to make sure they’re not removed. Just including the identifier in the source of the function will keep their values around.
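A quick illustration of the trick — my example, not from the original post:

```javascript
function outer() {
  var visible = 'kept alive';
  var invisible = 'may be optimized out of the closure';
  return function inner() {
    visible; // merely mentioning the identifier keeps the binding around
    // Pause here in a debugger: `visible` is in scope, while `invisible`
    // may not be, since inner() never references it.
    return visible;
  };
}

outer()(); // 'kept alive'
```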
Hopefully this will prevent some rare moments of JavaScript inadequacy that might strike when debugging. Don’t worry, it’s not you, it’s the debugger.
You can find a lot of good information about hoisting over at Nettuts and Ben Cherry’s blog. In case you haven’t come across it before, hoisting is how JavaScript developers describe the existence of certain references before they seem to be declared. For example:
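The example listing was lost; here is a hedged reconstruction matching the description that follows:

```javascript
var foo = 'hello';

function example() {
  console.log(foo); // logs undefined, not 'hello'
  var foo = 'world';
}

example();
```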
In this example, it seems like “hello” should be logged to the console, because the logging statement comes before the variable statement. The variable declaration is hoisted to the top of the function, as if the first line of the example function were var foo;.
What we call hoisting manifests itself as seemingly rewritten source files that move the declaration of variables and functions to the top of a function. I have described it as the JavaScript interpreter rewriting your source code before actually interpreting it. JavaScript behaves as if it changes this code:
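A hedged reconstruction of the “before” listing:

```javascript
function example() {
  console.log(foo); // logs undefined
  var foo = 'world';
}
```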
Into this:
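And a hedged reconstruction of the “after” form, with the declaration moved to the top:

```javascript
function example() {
  var foo; // declaration hoisted; the assignment stays where it was
  console.log(foo); // logs undefined
  foo = 'world';
}
```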
While this is a simple mental model to understand what is going on, this isn’t what happens at all. This feature of JavaScript semantics comes from a section in the ECMAScript specification called Entering Function Code and Declaration Binding Instantiation. Going through the specification is pretty tedious, but with careful reading, going through the background information, and my relatively more human readable translations of the spec, you should be able to understand what’s going on.
When a function is executed, an Execution Context is created. An Execution Context has a few different parts, but most importantly for this discussion, it contains a Lexical Environment. Conceptually, a Lexical Environment is an object that stores the bindings for identifiers that are used in the function. The Lexical environment is used to resolve identifiers when the function is actually executed.
As specified in section 10.4.3
- The following steps are performed when control enters the execution context for function code …
- Perform Declaration Binding Instantiation using the function code as described in 10.5.
Every time a function is called, before execution, go through the process of Declaration Binding Instantiation, as will be described next.
As specified in section 10.5
- For each FunctionDeclaration f in code, in source text order do:
- Let fn be the Identifier in FunctionDeclaration f …
- Let fo be the result of instantiating FunctionDeclaration f as described in Clause 13 …
- Call env’s SetMutableBinding concrete method passing fn, fo, and strict as the arguments.
Go through each of the function declarations within this function, instantiate those functions, and store the binding in the current Lexical Environment.
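A hedged reconstruction of the kind of example that sat here:

```javascript
function example() {
  foo(); // works: foo was bound before any line of the body ran
  function foo() {
    return 'instantiated ahead of time';
  }
}

example(); // no error
```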
In this case, before the function is executed, the function declaration is added to the Lexical Environment with the identifier foo and the value of an instantiated function object.
As specified in section 10.5
- For each VariableDeclaration and VariableDeclarationNoIn d in code, in source text order do:
- Let dn be the Identifier in d.
- Let varAlreadyDeclared be the result of calling env’s HasBinding concrete method passing dn as the argument …
- If varAlreadyDeclared is false, then:
- Call env’s CreateMutableBinding concrete method passing dn and configurableBindings as the arguments.
- Call env’s SetMutableBinding concrete method passing dn, undefined, and strict as the arguments.
Go through each of the variable declarations within this function and explicitly store the value undefined in the Lexical Environment.
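A hedged reconstruction of the variable-declaration counterpart:

```javascript
function example() {
  console.log(foo); // undefined: the binding exists, the value does not yet
  var foo = 'bar';
}

example();
```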
In this case, before the function is executed, the identifier foo is added to the Lexical Environment with the value undefined.
Now that I’ve read through that portion a few times, I can tell what is actually happening. It takes about a dozen reads to start glossing over the boilerplate that’s needed for a rigorous specification.
To summarize, when a function is first entered, before any of the lines are actually executed, the execution environment goes through the function’s source and picks out some special cases to deal with. First, it goes through each FunctionDeclaration, and adds references for each of them to the environment record. Then, it goes through each VariableDeclaration and adds references to the value undefined
for each of them. Only after this process has completed does the function body itself start to execute.
Although I have no intention to stop using the word hoisting, I am happy to have a pretty good understanding of why it’s not completely precise. The simple mental model of source rewriting allows developers to quickly visualize hoisting’s consequences on scope, but it misses the nuance that exists within the JavaScript interpreter. I had a fun time getting more acquainted with the process of reading a specification, and I hope that you try your hand at it too!
See you in the comments.
]]>I’ve seen an assortment of hacks used to handle the complexity of deciding what code needs to run after the Facebook JS SDK has been loaded onto a web page.
There are long lists of callbacks.
There are global booleans.
There are global function references, just to allow for two callbacks.
You don’t have to manually keep track of whether or not the Facebook initialization callback has been called. You don’t have to manually handle overwriting the global function fbAsyncInit with references to all the other functions that you need to execute. Using a deferred object is the best way to address these issues, and it can even keep you from adding any more global variables than needed.
The plan to tackle this problem is to create a deferred object that is fulfilled when the Facebook SDK has finished loading. Add callbacks to this deferred object wherever you need them in your application. Once the SDK has loaded, all of the callbacks will fire, and any callbacks that are added after the SDK loads will be fired immediately.
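A minimal sketch of the pattern. The original relies on jQuery's $.Deferred(); here createDeferred is a hand-rolled stand-in with the same done/resolve behavior so the mechanics are visible, and globalThis stands in for window:

```javascript
// Stand-in for $.Deferred(): callbacks added after resolution fire
// immediately — the key property that makes this pattern work.
function createDeferred() {
  var resolved = false;
  var resolveArgs = null;
  var callbacks = [];
  return {
    done: function (cb) {
      if (resolved) {
        cb.apply(null, resolveArgs); // SDK already loaded: fire right away
      } else {
        callbacks.push(cb);          // not yet: queue until resolve()
      }
      return this;
    },
    resolve: function () {
      if (resolved) return this;
      resolved = true;
      resolveArgs = Array.prototype.slice.call(arguments);
      callbacks.forEach(function (cb) { cb.apply(null, resolveArgs); });
      return this;
    }
  };
}

// The SDK calls window.fbAsyncInit once it has loaded; resolving the
// deferred there fans out to every registered callback.
globalThis.fbAsyncInit = function () {
  // FB.init({ appId: "..." }) would go here on a real page.
  globalThis.fbAsyncInit.fbLoaded.resolve();
};
globalThis.fbAsyncInit.fbLoaded = createDeferred();

// Anywhere later in the app:
globalThis.fbAsyncInit.fbLoaded.done(function () {
  // safe to call FB.* APIs here
});
```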
Once you have this in your application, anywhere after it, you can add functions that will be executed after the SDK has loaded by using window.fbAsyncInit.fbLoaded.done(callback);
Also, using this method, you can write code that waits on multiple asynchronous events using jQuery’s $.when()
. A quick Google search should have you set up with a lot of information on how to do that.
This post was slightly inspired by reading this post on Deferred method combinators, so you will probably find that a good read if you like this. Please let me know if you have any questions on twitter.
Tweet ]]>This works just fine most of the time. For browsers where pushState is available, everything works seamlessly with the URL. For browsers without pushState, everything again works fine using the URL fragment. But if you start sharing URLs between these two kinds of browsers, problems start cropping up; I recently came across this issue when working with the Facebook Like/Send buttons.
If you give a regular (non-fragment) URL to a browser that does not support pushState, Backbone, by itself, won’t be able to pick up on that and add the correct fragment. The same is true in the other direction when passing a fragment URL to a modern browser. With a little bit of extra bootstrapping code, and by giving up direct access to window.location, we can easily fix this problem.
We need to address both sides of this problem.
Since pushState is the standard, and in the future will be implemented across the board, I will use the regular URL as the canonical URL. When the application starts up, we need to jump through some hoops to get Backbone.history in the right state to support the canonical URL whether or not the browser supports pushState.
First, we need to start Backbone history, with whatever option is available for pushState. We pass the silent option as true, to ensure that no route handlers will fire. Then, if pushState is unsupported, we calculate the fragment from the current window.location and navigate to that fragment. If pushState is supported, we directly trigger the route handlers.
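A sketch of that start-up sequence. Backbone.history and window.location are injected as history and loc so the logic can be exercised anywhere (the function name is illustrative); start, navigate, and loadUrl are per Backbone's API, with loadUrl being the internal route dispatcher:

```javascript
function startCanonicalHistory(history, loc, supportsPushState) {
  // Start silently so no route handlers fire while we sort the URL out.
  history.start({ pushState: supportsPushState, silent: true });

  if (!supportsPushState) {
    // Legacy browser handed a canonical (path-style) URL: rebuild it as a
    // fragment and navigate there, triggering the matching route handler.
    var fragment = loc.pathname.replace(/^\//, "") + loc.search;
    history.navigate(fragment, { trigger: true });
  } else {
    // pushState browser: just fire the handlers for the current URL.
    history.loadUrl(history.fragment);
  }
}

// In the app, this would be called as:
// startCanonicalHistory(
//   Backbone.history,
//   window.location,
//   !!(window.history && window.history.pushState)
// );
```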
Now, on the other side, we need to ensure that URLs that are generated from the application are the canonical URL, regardless of pushState support in the browser doing the generation. This part is simple, as Backbone has done the heavy lifting for us. We just need to concatenate together pieces of the URL that are derived from Backbone.history.
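That concatenation can be sketched as follows. history is expected to be Backbone.history, and origin something like "http://example.com"; the options.root and fragment properties are per Backbone's API, though the function name is my own:

```javascript
// Build the canonical (pushState-style) URL from Backbone's current state,
// regardless of which URL style the current browser is actually using.
function canonicalUrl(history, origin) {
  var root = (history.options && history.options.root) || "/";
  return origin + root + history.fragment;
}

// canonicalUrl(Backbone.history,
//              window.location.protocol + "//" + window.location.host)
// yields the same path-style URL even in a hash-fragment browser.
```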
As long as we only use this function to produce a URL that is shared between browsers, our other bit of code will handle setting up the application correctly. After some find and replace, your application should now be able to easily handle both modern browsers and the browsers that we just have to grit our teeth and live with.
Since implementing this, I have not had any further problems between browsers with and without pushState. I’ve been able to pretend that everything supports pushState within the context of my Backbone app. I don’t think that it is a very large amount of code to add to your project, and I prize simple and short solutions. I hope that some of you out there find this useful in your application. Please let me know if you have any questions on twitter.
Tweet ]]>.index()
has always required me to look into the source to understand. I’m going to use this post to break down the four possible method signatures and their use cases for myself, as well as anyone else who has ever shared my confusion. I’m crossing my fingers that after I’m done, I’ll have this completely understood.
index() with no arguments
When index() is called with no arguments, it behaves mostly as you would expect. In the first example, it gives the zero-based index of #foo1 within its parent. Since #foo1 is the second child of its parent, index() returns 1.
The first potential confusion comes from the other examples in this fiddle. When index()
is called on a jQuery object that contains more than one element, it does not calculate the index of the first element, as I would have expected, but rather the index of the last element. This is equivalent to always calling $jqObject.last().index();
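The last-element behavior can be modeled in plain JS (illustrative, not jQuery's actual source; plain objects stand in for DOM elements):

```javascript
// Model of the no-argument form: take the collection's LAST element and
// report its position among its parent's children.
function indexNoArgs(collection) {
  var el = collection[collection.length - 1]; // last, not first!
  return Array.prototype.indexOf.call(el.parentNode.children, el);
}
```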
index() with a string argument
When index() is called with a string argument, there are two things to consider. The first is that jQuery will implicitly call .first() on the original jQuery object. It will find the index of the first element, not the last element, in this case. This inconsistency always makes me stop and think, so be careful with this one.
The second point is that jQuery is querying the entire DOM using the passed-in string selector and checking the index within that newly queried jQuery object. For example, when using .index("div") in the last example, jQuery is selecting all of the divs in the document, and then searching for the index of the first element of the jQuery object that .index()
is called on.
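Again as a plain-JS model (not jQuery's source): selectorMatches stands in for the document-wide $(selector) result.

```javascript
// Model of the string-selector form: find the FIRST element of the original
// collection within the document-wide selector result.
function indexWithSelector(collection, selectorMatches) {
  return Array.prototype.indexOf.call(selectorMatches, collection[0]); // first, not last
}
```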
index() with a jQuery object argument
In this case, the first element of the jQuery object that is passed into .index() is checked against all of the elements in the original jQuery object. The original jQuery object, on the left side of .index(), is array-like and is searched from index 0 through length - 1 for the first element of the argument jQuery object.
index() with a DOM element argument
In this case, the DOM element that is passed into .index() is checked against all of the elements in the original jQuery object. Once all of the other cases are understood, this should be the simplest one. It is very similar to the previous case, except that the DOM element is passed directly rather than being taken from a jQuery object container.
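These last two forms can be modeled together in plain JS (illustrative, not jQuery's source; plain objects stand in for elements):

```javascript
// Both forms search the ORIGINAL collection (array-like, from 0 to
// length - 1) for a target element.
function indexWithJQueryObject(collection, other) {
  // a jQuery object argument contributes its FIRST element as the target
  return Array.prototype.indexOf.call(collection, other[0]);
}

function indexWithElement(collection, el) {
  // a DOM element argument IS the target — no unwrapping needed
  return Array.prototype.indexOf.call(collection, el);
}
```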
Hopefully this effort helps you as much as it has helped me. All in all, after reading the source for .index(), none of this is that complicated, but, IMHO, it’s not intuitive in some cases.
Tweet ]]>Evolution of a callback. How to use jQuery’s ajax deferreds.
]]>Since I use jsFiddle so much, I’m bound to find an annoyance or two. I have found that I very often forget to switch away from the default JS library, MooTools. It’s starting to become muscle memory that every time I open jsfiddle.net, I immediately change to jQuery, but it’s not quite there yet. And it’s not only me: many people asking for help on freenode have the same problem. About once a day, someone posts a jsFiddle link to #jquery asking why something doesn’t work, and the code looks correct, except for the library chosen on the right side.
Until yesterday, I had always assumed that this default was alphabetical. I never paid attention to the fact that M comes after D in the alphabet, so Dojo should have been first, with jQuery somewhere in the middle. It turns out that the list is actually ordered by Piotr’s preference, and he’s a MooTools core developer. Also, jsFiddle is built on top of MooTools and Django, so it makes sense to keep that as the default.
I just wrote a Chrome userscript that can be used to change the default setting to jQuery. It can be easily modified to pick your favorite, or most commonly used, library. I wrote it in MooTools out of honor and respect for Piotr, and because I figured it’d be fun to learn some MooTools.
If anyone would like to add a greasemonkey version of this, just let me know and I’ll add it to the gist.
Tweet ]]>How does this work?
Once a developer understands these three things in JavaScript, they’re solidly on their way to rolling their eyes when they’re asked if they are a ninja at cocktail parties. It is these three concepts, in my opinion, that trip up most developers as they start to build applications larger than jQuery spaghetti.
There’s a much longer list in JavaScript: The Good Parts, and even more in our hearts. But I don’t think that these actually trip us up in daily development. When was the last time that you had a really hard time using the void keyword or were foiled by type coercion? All of these issues we learn once and almost immediately understand. Either that, or developers don’t encounter them because the libraries that they use, and learn from, don’t use any of these (anti-)features either.
Not only are issues like with
and eval
not often encountered, but tools like JSHint/JSLint also remove any accidental uses. Chances are, as long as a developer lints/hints their code, they’ll be making a conscious decision to use the potentially dangerous parts of the language. It isn’t so with these three concepts. Without this understanding, it is very difficult to become productive in JavaScript.
Other than these three concepts, I’d argue that JavaScript is no more “bad” than any other mainstream programming language, like PHP or even Java. There are gotchas in every language, and at least JS has most of them described very well in The Good Parts and in online documentation.
I could give you my own explanations here, but so many others have done a much better job than I could. There are a lot of options. If you still feel confused after reading one, just go on to the next one. If you still feel confused after reading every single one, find me or someone else in ##javascript on freenode.
How does this work?
this is resolved
The key here is to first learn about prototypal inheritance using Object.create and then come back to trying to understand what the new operator is doing. Most tutorials mess this up by explaining it the “classical” way first, which subsequently leads to much confusion about following the prototype chain and this and that.
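A minimal example of that Object.create-first approach (names like animal and dog are my own, purely illustrative): no new, no constructor functions, just a chain of plain objects.

```javascript
var animal = {
  describe: function () {
    return this.name + " is a " + this.kind; // `this` resolves to the caller
  }
};

var dog = Object.create(animal); // dog's prototype is animal
dog.name = "Rex";
dog.kind = "dog";

// describe isn't an own property of dog; the lookup misses on dog and
// walks up the prototype chain to find it on animal.
dog.describe();                        // → "Rex is a dog"
Object.getPrototypeOf(dog) === animal; // → true
```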
Do you have more examples or blog posts? Let me know on twitter and I’ll add them to this list.
]]>jQuery Plugin Unit Testing from John K. Paul on Vimeo.
]]>