Reactive Streams and Variables

Within reactive systems, several different terms are often used for the basic entity of a time-varying value within a data flow. These terms can imply or carry slightly different meanings:

Stream – A stream suggests a flow of values, where each value is significant. The term “stream” has primarily been used for IO operations, where we want a model for data that lets us operate on it as it is received. However, applying this term to the state of an object that may change is usually inappropriate, as it implies the wrong type of concerns about stateful data. A stream is a great model for IO operations, but within reactive programming it suggests “push” reactivity, which tends to suffer from inefficiency when applied to modeling state.

“Event Bus” and “Signals” are other terms that may suggest a concept similar to a stream. A signal may be a little vaguer, and more open to interpretation.

Observable – This term is used to indicate that an object’s changes can be observed. It accurately reflects something we can do with the object, but that is also why the term falls short. Rather than describing what the object is and its inherent nature (that it can vary), we are only describing one particular way that we can interact with it. This is a short-sighted description, because being time-varying may open up multiple ways of interacting with an object. If it is a time-varying object, not only can we observe it, we might be able to map it, modify it, or discover meta-data about how it may vary.

Variable – I have been increasingly using this term to describe the basic entity of a time-varying value, because “variable” accurately conveys that it represents something that may change or vary. One of the key aspects of an efficient reactive system is being able to lazily respond to a group of changes, rather than to each intermediate step (for example, if you are incrementally changing properties on an object). A variable conveys that a value may change, but that we are not necessarily interested in every intermediate value, as we would be with a stream. The term “variable” fits well with more efficient “pull” reactivity.
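
To make the contrast concrete, here is a minimal sketch (not any particular library’s API) of the difference between push and pull: a stream delivers every value to its listener, while a variable just holds the latest value for a consumer to read when it is ready.

// Push: a stream delivers every value to its listener as it occurs.
function numberStream(listener){
  [1, 2, 3, 4, 5].forEach(function(value){
    listener(value); // every intermediate value is handled
  });
}
numberStream(function(value){
  console.log('stream received', value);
});

// Pull: a variable just holds the latest value; a lazy consumer reads it
// once, when it is ready, and never processes the intermediate values.
var width = { value: 100 };
width.value = 150;
width.value = 200;
console.log('current width:', width.value); // only the final value, 200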

Granular and Coarse Reactivity

One way to categorize reactive JavaScript programming paradigms is by granularity: fine/granular and coarse reactivity. There are pros and cons to each approach, and it is worth considering them to decide which may work better for your application, or even how you might integrate the two.

Granular reactivity is an approach where each piece of time-varying state is stored in its own stream or variable. We can create many different reactive variables to contain the various parts of application state that may change. These variables are then used to explicitly control the different parts of the UI that depend on this state. This approach is facilitated by libraries like BaconJS, RxJS, and Alkali.
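
As an illustrative sketch (plain DOM code, not the API of any of these libraries), granular reactivity looks roughly like this: each piece of state is its own variable, explicitly wired to the element that displays it.

// A minimal variable with change notification (names are illustrative).
function Variable(value){
  this.value = value;
  this.listeners = [];
}
Variable.prototype.valueOf = function(){ return this.value; };
Variable.prototype.put = function(newValue){
  this.value = newValue;
  this.listeners.forEach(function(listener){ listener(); });
};
Variable.prototype.notifies = function(listener){ this.listeners.push(listener); };

// Bind a single variable to a single DOM element.
function bindText(element, variable){
  element.textContent = variable.valueOf();   // initial render
  variable.notifies(function(){               // update only this element
    element.textContent = variable.valueOf();
  });
}

var userName = new Variable('Ada');
var unreadCount = new Variable(3);
bindText(document.getElementById('user-name'), userName);
bindText(document.getElementById('unread-count'), unreadCount);

unreadCount.put(4); // only the unread-count element is touched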

Coarse reactivity takes a more bundled approach. All of the state that affects a given component is bundled into a single state object or container. When state changes, rather than discerning the separate aspects of each change, an entire component is re-rendered. By itself, this would be tremendously inefficient, as you would have a large-scale DOM rewrite on every state change. Consequently, this approach is almost always combined with a diffing mechanism, which involves a virtual DOM (an in-memory copy of the DOM) that can compare the rendered output before and after a change to compute exactly which DOM elements need to be changed, minimizing DOM modifications. This approach is most famously implemented in the React library.
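
By contrast, a rough sketch of the coarse approach looks like this (with the diffing step reduced to a comment; real libraries such as React compare virtual DOM trees rather than rewriting the container):

// All component state lives in one object; any change re-renders the
// component's entire output.
var state = { name: 'Ada', unread: 3 };

function render(){
  return '<h1>Hello, ' + state.name + '</h1>' +
         '<span>' + state.unread + ' unread messages</span>';
}

function rerender(){
  // A real library would diff this output against an in-memory virtual DOM
  // and patch only the changed nodes; naively, we rewrite the whole container.
  document.getElementById('app').innerHTML = render();
}

state.unread = 4; // one small state change...
rerender();       // ...still re-renders the entire component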

Again, there are certainly advantages and disadvantages to each approach. Coarse reactivity has gained tremendous popularity because it facilitates the creation of reactive UIs while still allowing developers to use a familiar imperative coding style. This style, sometimes combined with syntactic additions (JSX), provides a terse syntax for writing components, with familiar style and elements.

There are distinct disadvantages as well, though. By avoiding any explicit connection between individual components of state and their resulting output, reactive changes result in large-scale rewrites of component output. Again, to cope with this, coarse reactivity employs diffing algorithms to precisely determine what has changed and what hasn’t. However, this comes with a price. In order for the diffing to work properly, it must maintain a virtual copy of the DOM so it can compare the output before and after reactive changes to determine which DOM elements to actually update. This virtual copy consumes a substantial amount of memory, often requiring several times as much memory as the corresponding DOM alone.

Memory consumption is often trivialized (“memory is cheap”) and easily ignored, since it doesn’t usually affect isolated performance tests. However, high memory usage is a negative externality: it slows down the many layers of caching in the system, incrementally dragging down not only its own code but other code as well. This is similar to real-world externalities like pollution; in isolation it does not seem worth dealing with, but collectively, when everything is consuming large amounts of memory, the effects become much more visible.

The imperative nature of coarse reactivity also complicates efforts to create two-way bindings. Because the relationship between data sources and their output is obscured by the imperative process, there is no natural connection that relates input changes back to their sources. Of course, there are ways to set up bindings back to data, but this doesn’t come for free.

On the other hand, using a purer, granular approach to reactivity, we can overcome these issues. This has its own challenges as well. The most obvious drawback of granular reactivity is that it requires a more functional approach, rather than the familiar imperative one. However, this can also be considered a benefit. Granular reactivity pushes developers towards using more functional paradigms, and functional programming has substantial benefits. While these benefits can be explored in far more depth, briefly, it leads to a more direct way of describing “what” is happening in an application, rather than working in terms of “how” a sequence of state changes will accomplish a task. This is particularly important as we read and maintain code. While we often think about coding in terms of writing new code, most of us spend much more time reading and updating existing code. Being able to comprehend code without having to replay state sequences is a huge boost for comprehension.

And granular reactivity certainly provides a more memory-efficient approach. Only the elements that are bound to data or variables incur extra memory, and only the memory necessary to track their relationships. This means that static portions of an application, without any data binding or backing variable, do not require any extra memory elements (in a virtual DOM, or otherwise). In my work with a highly memory-intensive application, this has been a tremendous benefit.

Granular reactivity also creates direct relationships between state and UI component values. This greatly simplifies two-way data bindings, because UI inputs can be directly mapped back to their source. This two-way connection can automatically be provided by a good granular reactive system.
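
Building on the Variable and bindText sketch above (still illustrative, not a specific library’s API), a two-way binding simply maps the input event back to the same variable:

// Two-way binding sketch: the input writes straight back to its source,
// because the relationship between state and element is explicit.
function bindInput(input, variable){
  input.value = variable.valueOf();
  variable.notifies(function(){        // source -> input
    input.value = variable.valueOf();
  });
  input.oninput = function(){          // input -> source
    variable.put(input.value);
  };
}
bindInput(document.getElementById('user-name-input'), userName);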

Finally, granular reactivity can more easily be incrementally (and minimally) added to existing components and UI applications. Rather than having to convert whole components to using virtual DOM output, individual reactive elements can be incrementally added to components with very minimal changes.

While these are contrasting approaches, it is certainly feasible to combine them in a single application. One can use coarse-grained components (with a virtual DOM) for portions of the application that may not incur as much memory overhead (perhaps due to low frequency of usage), while areas with heavier usage, or that benefit from more automated two-way bindings, can use granular reactivity. Either way, it is important to recognize the pros and cons of these approaches. While coarse-grained reactivity may be quick and easy, deeper concerns may suggest the more efficient and functional approach of granular reactivity.

Rendering Efficiently with Two Phase UI Updates

A responsive user interface is a critical aspect of a good user experience, and maintaining responsiveness in increasingly complex applications with growing data can be very challenging. In this post, I want to look at the practice of using two distinct UI update phases to achieve optimal performance.

Several months ago I started a new position with Doctor Evidence, and one of our primary challenges is dealing with very large sets of clinical data and providing a responsive UI. This has given me some great opportunities to work on performance with complex UI requirements. I have also been able to build out a library, Alkali, that facilitates this approach.

Two phase UI updates basically means that we have one phase that is event-triggered and focused on updating the canonical state of our application and marking which dependent UI components are invalid and in need of rerendering. We then have a second phase that is focused on rerendering any UI elements that have been marked as invalid, computing any derived values in the process.
[Figure: two phase rendering]

State Update Phase

The state update phase is a response to application events. It may be triggered by user interaction like clicks, incoming data from a server, or any other type of event. In response, the primary concern, as the name suggests, is to update the canonical state of the application. This state may exist at various levels: it could be persisted data models that are synchronized with a server, or it could be higher-level “view-models” that are often more explicitly called out in MVVM designs and may reflect more transient state. However, the important concept is that we focus only on updating “canonical” state, that is, state information that is not derived from other sources. Derived data should generally not be directly updated at this stage.

The other action that should take place during the state update phase is invalidating any components that depend on the state being updated. Naturally, the state or data model components should not be responsible for knowing which downstream components to update or how; rather, they should allow dependent components to register for notification when a state update has occurred, so they can invalidate themselves. The key goal of the invalidation part of the state update phase is not to actually re-render anything immediately, but simply to mark what needs to be rendered in the next phase.

The purpose of this approach is that complicated or repetitive state changes never trigger excessive rendering updates. During state updates, a downstream dependent component can be marked as invalid over and over, and it will still only need a single rendering in the next phase.

The invalidation should also trigger or queue the rendering actions for the next phase. Fortunately, browsers provide the perfect API for doing exactly this: all rendering should be queued using requestAnimationFrame. This API is already optimized to perform the rendering at the right time, right before the browser lays out and paints, and to defer it when necessary to avoid excessive render frequency in the case of rapid events. Much of the complexity of timing the rendering correctly is handled by the browser with this API. We just need to make sure our components only queue a single rendering in response to invalidation (multiple invalidations in one phase shouldn’t queue up multiple renderings).
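
As a minimal sketch of this invalidate-and-queue mechanism (illustrative only, not a particular library’s API), a component might look like this:

// Invalidation only marks the component dirty and queues a single render
// via requestAnimationFrame; rendering itself happens in the next phase.
function Component(element, getData){
  this.element = element;
  this.getData = getData;     // pulls (possibly computed) data on demand
  this.invalidated = false;
}
Component.prototype.invalidate = function(){
  if(!this.invalidated){      // repeated invalidations queue only one render
    this.invalidated = true;
    var component = this;
    requestAnimationFrame(function(){
      component.invalidated = false;
      component.render();     // rendering phase, on the next frame
    });
  }
};
Component.prototype.render = function(){
  this.element.textContent = this.getData();
};

During the state update phase we only ever call invalidate(); no matter how many times the state changes in one event turn, render() runs once in the next frame.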

Rendering Phase

The next phase is the rendering phase. At this point, there should be a set of rendering actions that have been queued. If you have used requestAnimationFrame, this phase will be initiated by the browser calling the registered callbacks. As visual components render themselves, they retrieve data from data models, or from computed derivatives of those models. It is at this point that any computed or derived data that was invalidated in the previous phase should be calculated. This can be performed lazily, in response to requests from the UI components. Computing on demand avoids unnecessary calculations for any computed data that is no longer needed.

The key focus in this phase is that we render everything that has been invalidated, computing any derived data along the way. We must ensure that this phase avoids any updates to canonical data sources. By separating the phases and avoiding state updates during rendering, we eliminate the echoing or bouncing of state updates that can slow a UI down or create difficult-to-debug infinite loops. Batching DOM updates into a single event turn is also an important optimization for avoiding unnecessary browser-level layout and repaint.

In fact, this two phase system is actually very similar to the way browsers themselves work. Browsers process events, allowing JavaScript to update the DOM model, which is effectively the browser’s data model, and then once the JS phase (state updating) is complete, it enters into its own rendering phase where, based on changed DOM nodes, any relayout or repainting is performed. This allows a browser to function very efficiently for a wide variety of web applications.

An additional optimization is to check that invalidated UI components are actually visible before rerendering them. If a UI component with changed dependent values is not visible, we can simply mark it as needing to be rerendered, and wait to render until if and when it becomes visible. This can avoid further unnecessary rendering updates.

Alkali

This approach is generally applicable and can be used with various libraries and applications. However, I have been building and using a reactive package, alkali, which is specifically designed to facilitate this efficient two phase model. In alkali, there is an API for a Variable that can hold state information, and dependent variables can be registered with it. One can easily map one or multiple variables to another, and alkali creates the dependency links. When a source variable is updated, the downstream dependent variables and components are automatically invalidated. Alkali handles the invalidation aspect of state updates as part of its reactive capabilities.

Alkali also includes an Updater class, which can be used by UI endpoints or components that are dependent on these variables (whether canonical or computed). An Updater is simply given a variable, an element, and code to perform a rendering when necessary. The Updater registers itself as a dependency of the given variable, will be invalidated if any upstream changes are made, and queues its re-rendering via requestAnimationFrame. The rendering code is then called, as needed, in the proper rendering phase, at the right time.

With alkali, computed variables are also lazy. They are only computed as needed, on demand by downstream UI Updaters. They are also cached, so they are only computed once, to avoid any unnecessary repetition of computations.
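
Based on the description above, usage might look roughly like the following. This is a hypothetical sketch: the names and signatures are illustrative rather than a copy of alkali’s actual API, so consult the alkali documentation for the real details.

// Hypothetical sketch only: see alkali's documentation for its real API.
var celsius = new Variable(22);                    // canonical state
var fahrenheit = celsius.to(function(c){           // lazily computed, cached
  return c * 9 / 5 + 32;
});
new Updater({
  variable: fahrenheit,
  element: document.getElementById('temperature'),
  renderUpdate: function(value, element){
    element.textContent = value + ' F';            // runs in the rendering phase
  }
});
celsius.put(25); // invalidates fahrenheit and queues a single re-render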

As we have tackled large-scale data models in a complex UI, the two phase approach has helped us manage complex state changes and provide fast, responsive, and efficient UI rendering of our data model as state rapidly changes.

Reactivity and Caching

Functional reactive programming techniques have seen increasing usage lately, as a means of simplifying code, particularly in situations where views can react to model data. Reactive programming can replace a lot of error-prone imperative code dedicated to responding to data changes, allowing synchronization of views and data to be written more precisely, without redundancy of intent.

As I have been working on reactive programming in xstyle, I believe that reactivity is actually a natural extension of what I like to call resource-oriented programming. Resource-oriented programming is the act of applying RESTful concepts to programming, so that resources can be treated with a uniform interface. Reactivity fits well within this concept because it is intended to provide a uniform interface for responding to changes in resources, using the observer pattern. With this uniform approach, bindings can easily and naturally be written.

Viewed from this perspective, REST suggests a surprisingly insightful mechanism for reactivity. A key mechanism in REST is caching: the semantics of resource interfaces are driven by exposing how resources can be cached. This resource-oriented approach can guide reactivity, but first, let’s review the mechanism of reactivity.

Generally, reactivity is built on an interface for adding a listener to be notified when a particular value changes, often called the observer pattern. This is sometimes compared to a continuous promise. A promise is something that can be resolved, with any listeners notified. A reactive, observable value is similar, except that rather than having a single resolution, it can continue to resolve to different values indefinitely. Alternately, reactivity is sometimes compared to a stream, with new values being streamed to a listener as they come in (there is some impedance mismatch with streams, though, since listeners in a reactive system may not care about the entire sequence, only the last value).

Perhaps the most obvious strategy for these change notifications in the observer pattern is to include the changed value in the notification. With this approach, when a change occurs, the new value is propagated out to any listeners as part of the notification. A consumer would retrieve an initial value, and then listen for changes and respond to them.

The REST-inspired resource-oriented approach still uses the observer pattern to alert consumers of changes. However, with the REST approach, the notifications and flow of data are delineated. Change events function as simply invalidation notifications, and the flow of data always takes place as a separate request from the consumer to the source.

Invalidation notifications complement caching. Caching of values (particularly computed values) can be employed to ensure that no unnecessary computations take place, and any entity can cache safely with the knowledge that it will be notified if any of the source data is invalidated.
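
A small sketch of this cache-and-invalidate pattern (illustrative, not xstyle’s actual API): notifications carry no data, consumers pull values separately, and computed values can be cached safely until an invalidation arrives.

// A source notifies listeners that something changed, without sending the value.
function Source(value){
  var listeners = [];
  return {
    get: function(){ return value; },
    set: function(newValue){
      value = newValue;
      listeners.forEach(function(listener){ listener(); }); // invalidation only
    },
    onInvalidate: function(listener){ listeners.push(listener); }
  };
}

// A computed value caches its result and recomputes only on demand.
function Computed(compute, sources){
  var cache, valid = false, listeners = [];
  sources.forEach(function(source){
    source.onInvalidate(function(){
      valid = false;                                        // just mark stale
      listeners.forEach(function(listener){ listener(); }); // propagate invalidation
    });
  });
  return {
    get: function(){
      if(!valid){ cache = compute(); valid = true; }        // compute on request
      return cache;                                         // cached until invalidated
    },
    onInvalidate: function(listener){ listeners.push(listener); }
  };
}

var price = Source(10), quantity = Source(3);
var total = Computed(function(){ return price.get() * quantity.get(); },
                     [price, quantity]);
price.set(12);            // total is only marked stale here
console.log(total.get()); // 36, computed only when the consumer asks for it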

The REST approach of keeping notifications (only responsible for indicating a change) separate from a cacheable data request mechanism, rather than using notifications as the data delivery mechanism, has several important advantages:

  • Consumers always follow the same code path to get data and act on it. Regardless of whether consumption (like rendering data) is on the initial retrieval, or whether it is triggered from a change notification, the consumer always requests and retrieves data in the same, consistent way.
  • Consumers have control over whether or not to actually request data in response to a notification. There can be plenty of situations where a consumer may temporarily or permanently not need to access new data when notified of its presence. For example, a component that is currently hidden (think of an object in the pane of an unselected tab) may not need to force all the computations necessary to retrieve the data if it doesn’t actually need to render it. The notification of a change is still important, so the component knows it will need to rerender if and when it is visible again, but it can potentially ignore multiple data changes while hidden, simply by not retrieving data until necessary. This approach affords maximum laziness in applications.
  • When consumers retrieve data, it produces more coherent call stacks. Call stacks that consist of data sources calling intermediaries calling consumer components can be incredibly confusing to track. While data sources can still be at the bottom of the stack with their notifications, the actual data flow will always be ordered by consumers calling intermediaries calling data sources, for consistent, intuitive debugging.

Interestingly, ECMAScript’s new Object.observe functionality, which provides native object notifications, actually follows this same approach. The notification events that are triggered by property changes are only invalidation notifications that do not include the changed value. This aligns with the resource-oriented approach I have described, and encourages this strategy.

There are certainly situations where this approach is not appropriate. In particular, this isn’t appropriate when there are large latency increases associated with separating notification and retrieval operations (for example, if they necessitate extra network requests). However, within a single process, I believe this resource-oriented cache-and-invalidate approach can yield cleaner and more efficient code, and is worth considering as you approach reactive programming.

A Few Tips on Statistics

Statistics are a valuable tool, yet frequently misused and distorted. I have often wished that statistics was a mandatory high school class, since their usage is so critical to understanding the world around us. I thought I would try to share a few tips for better understanding statistics.

First, while statistics are often misused, they truly represent our most valuable tool for objectively assessing and quantifying issues and the nature of the world beyond our reach. The alternative is to rely on anecdotal evidence: stories that we or others have experienced. While stories can be a great way of helping us understand how situations work and of connecting with others, they tend to lack quantifiable objectivity. Stories can very easily be non-representative of the most common realities. We, or those we listen to, can easily have experienced unique situations that do not actually correspond with what is normal. In fact, the stories we tend to enjoy most are those that are most unusual. So the stories that are shared, whether orally or through social or news media, are often the ones that are furthest from an accurate representation of typical reality. For example, the deaths we hear the most about on the news are the ones that are most unusual and least likely to happen, precisely because those are the ones that are most interesting; the causes that in reality take the most lives are banal.

Even though statistics are our most objective tool for describing the world in quantifiable ways, there are plenty of ways we can be deceived by them. Let’s look at a couple of basic steps for filtering statistics.

The first type of statistic we often see is one that simply describes some issue. These statistics deal with “what”, but not “how” or “why”. They are the easiest statistics to understand, since they simply describe a measurable quantity. The first thing to consider in evaluating them is context. Many statistics can be completely overwhelming and difficult to comprehend, and when a statistic is difficult to comprehend, it is actually a good intuition that we need more context.

For example, the US national debt is currently about 17 trillion dollars. The huge size of the national debt is often talked about, but in reality this number alone lacks context. Trillions of anything are simply incomprehensible for just about any human. So how can we bring context? We could consider this at a personal level: the national debt per person is about $55,000. This is a little easier to understand (less than many mortgages, but much higher than a responsible level of consumer debt). We could also consider the statistic from a historical perspective (the debt as a percentage of GDP is higher than at most times in US history, but lower than at certain times, like during WWII). We could also compare the US to other countries (the US is higher than most, but lower than several; for example, Japan’s is nearly twice as high). This isn’t to make any particular claims about the US national debt, but it demonstrates how we can try to find some context for statistics. The US debt is high, but without this context, numbers like the national debt are conceptually meaningless, and it is important that we find things to compare against to provide meaning.

The next thing to beware of in approaching statistics is that often something is being implied. While using statistics to understand the “what”s of the world, we often want to take the next step to understand “why” things are the way they are, and “how” we can affect them. We want to understand cause and effect.

Probably the most common saying among those who deal with research and statistics is “correlation does not imply causation”. This may sound confusing, but it is a fairly simple warning. While statistics may show that two things are related, we need to be very cautious about assuming that one thing caused another thing.

Let’s consider an example. One could say that you are far more likely to die when you are in the hospital than when you are at the grocery store. Of course this is true, but does it therefore imply that hospitals are more dangerous than grocery stores? Of course not. Hospitals aren’t the cause of the deaths; the illnesses that lead both to deaths and to hospital visits are the cause. We can’t assume that just because hospitals and deaths are related, one is the cause of the other.

But determining cause and effect, so that we can learn which factors lead to which results, is still extremely important, and statistics can be used to assess cause and effect. So how can we accurately determine when a statistic indicates a true cause? A statistical relationship can demonstrate cause and effect if we can show that no other factor is the real cause. Sometimes we can logically establish this. Other times we can set up experiments. In medicine (and increasingly in other fields, like microeconomics), trials are set up where different people are given or not given some intervention (like a drug), and then the effects can be measured by comparing the two groups. Usually the recipients are randomly chosen. The advantage of this approach is that by randomly choosing who receives the intervention, we can be confident that no other causes are in play. For this reason, randomized controlled trials are used extensively to accurately assess drugs, and can be very valuable for making accurate assessments of other types of interventions.

Outside of controlled experiments, it can be very difficult to pin down causes, because it is very difficult to be sure that no other factors are influencing the results. For example, determining the effect of different educational institutions is remarkably hard, because factors like affluence affect both the educational outcome and the type of institution that parents choose; determining whether the institution was the deciding factor, rather than any of the vast number of other ways that wealth can benefit an education, is again very difficult.

When we can’t control the factors, the primary remaining technique is regression analysis. However, this is where this casual discussion of statistics quickly turns much more advanced. Suffice it to say that truly demonstrating cause and effect with real-world, uncontrolled statistics typically requires complicated analysis.

To summarize the warning here: if someone is trying to convince you through statistics that one factor is the cause of some issue, then unless the cause is logically provable, comes from a controlled experiment, or involves regression analysis, the claim is probably suspect.

Statistics are an invaluable tool for understanding our world and making informed decisions about how we prioritize our time and money. But in order to understand them, remember to put statistics in context, and be cautious about drawing cause-and-effect relationships. Hopefully these tips help make sense of statistics.

The Engineer’s Charities

There has been a growing movement away from disinterested charity toward rational and effective altruism. I wanted to write this post because I believe that we, as engineers, should be at the forefront of this movement. This post is not an attempt to persuade engineers that they should give, although most engineers are reasonably well compensated and have the opportunity to do tremendous good through their generosity. Rather, I want to consider how and where we can give in a way that aligns with the values and principles of engineering.

A good engineer is a peculiar type of person: one who solves problems and designs solutions by applying logical, rational thinking and careful, clear analysis. Engineers are often not the originators of research, but must analyze research to intelligently determine how the work of others applies to solving real problems. Engineers must not operate solely in the abstract and theoretical, as they constantly must solve problems in the real world, and yet they still must integrate abstract theory to create designs that work beyond surface-level needs and that have enduring and broad application now and later.

Certainly one clear distinction between a poor engineer and a good one is that the former is content to accept whatever tool, material, or design is put before him first, perhaps because it is the most popular or was suggested by another, whereas a good engineer is a meticulous critic, knowing how to carefully choose between the bad, good, better, and best options. A good engineer can clearly point to the reasons why a particular component or design is better, and precisely why and by how much it will yield better results or performance.

There is no reason we shouldn’t apply this same level of analysis and critique to our generosity and giving. Unfortunately, this is not how the world of charity generally works. Money is typically raised by making emotional appeals and leveraging networks of connections to attract donations. Most donations occur not as the result of careful analysis seeking the best and most efficient opportunity to deliver some good for others, but as an incidental response to some plea. Consider if we applied this approach to one of our projects: rather than analyzing our options and rationally choosing the optimum components or design, we just went with whatever advertisement or buzzword we heard most recently. This would represent the epitome of lazy engineering.

As I mentioned before, there have been increasing efforts to engage in charity not as a capricious way of getting rid of money and creating some temporary emotional satisfaction, but with a clear goal of helping the most people in the greatest way possible with the funds available to us. This movement towards effective altruism calls for the same relentless analysis of different possible mechanisms and the results they achieve that we engineers demand in the projects where we spend most of our energy. This approach demands real research into interventions and the evidence of their efficacy, over emotional appeals and marketing.

With this, I want to introduce a couple charities and meta-charities that epitomize effective, rational altruism:

A Charity Based on Research

Innovations for Poverty Action (IPA) is a charity founded by some of the best researchers in development economics. This charity has a foundation of research work, focused first on scientifically evaluating different possible interventions to objectively determine the most promising ones. IPA has been at the forefront of the movement to make extensive use of randomized controlled trials (RCTs). This form of research has been incredibly promising, since it facilitates clear tests of causation. In non-randomized trials, controlling for various factors and establishing causation is incredibly difficult and prone to biases, whereas RCTs provide an irrefutable cause (random selection) as the basis for the comparison.

From the results of this research, IPA has developed programs to scale up proven and promising interventions that have been shown to be effective and efficient. IPA has branched off a separate organization, Evidence Action, for the funding and development of these efforts.

Givewell – Givewell is a charity evaluation organization that has begun to receive significant attention for its extremely detailed analysis of a select few charities, and of how many lives are saved or impacted for each dollar spent. Their level of analysis is probably not unprecedented (similar analysis probably happens within the Gates Foundation, USAID, and perhaps some other foundations or agencies), but their transparency almost certainly is. They have done very careful and detailed research into top, effective charities. You can visit Givewell’s site for more information, but their current top-rated charity is the Against Malaria Foundation (AMF). From their research, it is estimated that approximately one life is saved for every $2,000 donated, a remarkable return on investment, with a high level of research behind it.

For some, even engineers, this might sound excessively cool and calculating. Isn’t giving supposed to be more than just another engineering project, something emotionally and/or spiritually satisfying? I don’t believe rational giving is mutually exclusive with these other forms of satisfaction. We often apply the greatest level of rationality and thought to the things we care most about. And spiritually, at least for me as a Christ-follower called to care for others, making the actual impact of my giving on others’ lives the highest goal is the ultimate fulfillment of this spiritual pursuit. Likewise, emotionally, realizing that I have tangibly and realistically saved dozens of real human lives, probably children who were loved as much as I love my children, is tremendously compelling. My challenge to engineers: approach giving just like you do engineering, with your best, most rational, careful analysis and concern. The impact you can make is immense.

Closure Based Instance Binding

In JavaScript, we frequently write functions where we wish to access |this| from another (typically outer) function. The |this| keyword does not inherit from outer functions like other variables; each function has its own |this| binding. For example, suppose we have a widget that needs to trigger some code in response to a button being pressed. We might try:

buildRendering: function(){
  this.domNode.appendChild(document.createElement("button")).onclick = function(){
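    // |this| here will be the button element, not the widget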
    if(this.ready){
      this.showMessage();
    }
  };
},
showMessage: function(message){
...

However, this code won’t work as intended, because |this| in the inner function refers to something different (the target DOM node in this case) than it did in the outer method. Oftentimes developers use a bind() or hitch() function to force a function to have the same |this| as the outer function, fixing the code like this (note that bind() is only available in ES5; most JS libraries provide a function that does something similar):

buildRendering: function(){
  this.domNode.appendChild(document.createElement("button")).onclick = (function(){
    if(this.ready){
      this.showMessage();
    }
  }).bind(this);
},

Another technique used by developers is closure-based instance binding, where we assign |this| to a variable that can then be accessed by the inner function:

buildRendering: function(){
  var widget = this;
  this.domNode.appendChild(document.createElement("button")).onclick = function(){
    if(widget.ready){
      widget.showMessage();
    }
  };
},

Both techniques have their place, but I almost always use the latter. I wanted to give a few reasons:

  • Philosophical – JavaScript is foremost a closure-based language, with method dispatching as an additional sugared mechanism built on the foundation of property access and call operators. This is in contrast to other languages that treat object-oriented method dispatching as foundational, with closures as something built on top of it. Every function in JavaScript has a closure over some scope (even if it is the global scope), and using closures to maintain references to different instances through different function levels is the essence of idiomatic JavaScript.
  • Clarity and readability – “this” is a very generic term, and in layered functions its ambiguity can make it difficult to determine what instance is really being referred to. Assigning |this| to a variable lends itself to giving the instance a clearly articulated name that describes exactly what instance is being referred to, and can easily be understood by later developers without having to trace through bindings to see where |this| comes from. In the example above, it is clear that the inner function is referring to the widget instance, as opposed to the node or any other instance that might get assigned to |this|.
  • Flexibility – Using closure based instance binding, we can easily reference a parent function’s instance without losing the ability to reference the inner function’s |this|. In the example above, we can access the widget instance, but we can also still use |this| to reference the target node of the event.
  • Code Size – After minification, setting up access to the parent |this| instance requires approximately the following number of bytes (it can vary with minification techniques; this assumes variable names are reduced to 2 bytes):
    function.bind() – setup: 9, each reference: 4
    closure-based – setup: 8, each reference: 2
    Some functions, like forEach, already have an argument to define |this| for the callback function, in which case they have a shorter binding setup.
  • Reduced Dependencies – Function binding entails using ES5’s bind(), which limits compatibility to ES5 environments, or requires reliance on a third-party library, decreasing your code’s portability and flexibility for use in different contexts. Closure-based binding relies on nothing but JavaScript mechanisms that have been supported by all VMs forever.
  • Performance – There are two aspects of performance to test, which vary with the binding mechanism. First is the setup of the function; this can dominate if the function is not called very often. Second is the actual function invocation; this obviously matters more if the function is called frequently. From these tests you can see the clear, and often immense, performance advantage of closure-based binding:
    Setup: http://jsperf.com/bind-vs-closure-setup
    Invocation: http://jsperf.com/bind-vs-closure-invocation
  • Of course there are certainly situations where it is appropriate to bind. If you are going to provide a method directly as an event listener, binding is often appropriate. However, it is worth noting that I have seen plenty of instances of a method being created specifically for an event listener when an anonymous inline function (with closure-based instance binding) could easily have been used.

UNFPA

It has been estimated that, at a cost of about $1.7 billion a year, modern contraceptive use prevents 105 million induced abortions and another 3.6 million infant and maternal deaths per year (as of 2004), with the United Nations Population Fund (UNFPA) being a main provider of contraceptives in developing countries where access would otherwise be limited. If you believe that life begins at conception, then the UNFPA has almost certainly prevented more deaths than any other organization on earth, and with more efficiency (between $7 and $177 per death prevented) than any other organization as well. The tragic irony is that funding cuts to the program in this decade have largely been driven by pro-life organizations, the very ones that hold this view of life, due to the continuing misconception that the UNFPA advocates abortion. This isn’t intended as a slight against the well-meaning intentions of these organizations, but as a lesson that to honestly pursue a cause, we must humbly learn from empirical feedback what the effective solutions are, rather than simply pushing forward with our ideological assumptions.

You can give to UNFPA here.

Capitalism is a Tool

Free market capitalism is awesome. But capitalism is a tool. Arguing about whether it is universally better than some other tool (socialism, communism, restricted capitalism, etc.) is as foolish as arguing about whether a hammer is universally better than a screwdriver. The more relevant question is when it is appropriate (or in what form). Capitalism works fantastically well under a few conditions that create incentives driving all parties to make decisions that benefit everyone:
1. Consumers have the ability to make a rational choice about what to consume.
2. Consumers have the freedom to choose between different products or services, creating competition that drives producers towards better value.
3. Demand drives producers to increase supply, benefiting consumers.
4. Consumers’ inability to afford a product/service does not represent an ethical violation.

When these conditions are present, capitalism generally contributes to a flourishing and just economy. For example, with cars, consumers can easily research and test drive cars to make a rational choice, they can choose between different makers and models, demand pushes auto makers towards building more cars and increasing availability, and in general there is no basic, inviolable human right to having a vehicle. This is a win-win situation: incentives benefit both producers and consumers in a relatively fair and accurate way that generally generates high and robust growth and production. Capitalism was integral to the industrial revolution, where many growing sectors matched these conditions and showcased the benefits of capitalism in spectacular fashion.

However, if any of these conditions is missing, the effects can be negative. When a condition is missing, an otherwise free market may actually create perverse incentives rather than beneficial ones. For example, when a monopoly occurs, consumers no longer have freedom of choice, which is why antitrust laws are enforced. While many economic sectors fulfill these conditions, it is naive to assume that all do.

Education is a sector where there is general consensus that it is ethically unacceptable to deny some children an education due to economic circumstances (especially since such circumstances are usually beyond the children’s control). While a free market still exists in private education, allowing consumers to choose potentially superior services, this sector is supplemented with a public system because the ethical condition of capitalism is not fully met. It is not appropriate to apply capitalism as the only tool, thereby denying some children access to education.

Health care is a sector where consumers intrinsically fall short of rational and free choice. Some services are rendered in distress or emergency, where there is little opportunity for research or alternatives. Other choices are made by specialists where the buyer isn’t involved (so there is no budget/value force towards lower prices) or doesn’t understand the options well enough to participate. Decisions also involve short-term versus long-term benefits, where humans are notoriously poor at making rational decisions (we tend to choose short-term gains with little concern for the long term). Insurance also plays a buffering role, further removing the consumer from direct value-based decisions. With these conditions for capitalism missing, perverse incentives are present, motivating increasing health care costs rather than reductions in costs. Predominantly privatized health care has struggled to produce good value (compared to government-provided universal health care), not because capitalism doesn’t work, but because the conditions for capitalism simply do not exist (and it is unlikely that they can be forced to exist) in health care. This is blatantly evident with the US health care system, one of the few developed countries still languishing with a privatized model. The US spends vastly more than any other country on health care, and costs are rapidly increasing (since incentives drive prices up instead of down), to the point where statisticians use it as an example of a statistical outlier. In fact, the US government spends more on our privatized health care (where the majority of expenditures are private) than do the governments of countries with universal health care (where the government covers the majority of costs), all while we have shorter life spans and higher mortality rates than most other developed countries. Finally, health care also fails to meet the ethical condition. Denying basic health services due to economic circumstances is a moral failure, even if the system were working efficiently.

Computers and the Internet are also creating situations where the demand-driven supply condition is starting to evaporate in certain sectors. An example of this is the music industry. Duplication of music (and other products like software) has effectively reached zero effort. This means that supply can be almost infinite as soon as music is recorded, as it can be distributed with virtually no effort from the producer. Consequently, perverse incentives are present, leading the music industry to create artificial constraints (DRM). With this condition for capitalism absent, the music industry is incentivized to create demand by reducing/constraining supply (a negative utility to society), rather than working to increase supply (benefiting society). The solution is probably not to socialize the music industry, but we should acknowledge the suboptimal efficiency and recognize that alternate post-capitalistic models may need to be exercised (and perhaps already are; many artists have opted for more of a gift-economy approach).

Of course there will be legitimate disagreements about the degree and type of economic models needed in different sectors, but we should at least start with the right question. We must start with a proper perspective of economic models as tools, not sacred institutions, that can be applied in differing degrees in different situations, rather than falling for simplistic, overarching false dichotomies. Rather than leaning on our assumptions, the question of which tool to use should be driven by a pragmatic look at what works for the situation.

Gratefulness and Gas

A gallon of gas contains 130 million joules of energy, equivalent to the energy of lifting a man from sea level to the top of Mt. Everest about 15 times, or 31,000 Calories, about two weeks’ worth of food. Being able to purchase something with this much energy for less than $30 is incredible, something we should be extremely grateful for, and something we may not always get to enjoy.

The price of gas is based on simple global supply and demand. Gas comes from oil, which is a fungible resource readily interchangeable on the global market. Supply is dominated by OPEC. The US portion of the supply is very small: it currently produces around 8% of global oil, and holds only 1.5-2% of global reserves (there is some debate on the extent of provable reserves, so it is possible we have up to a percentage point more). On the other hand, the US provides a significant part of the demand (25% of global demand), so the US’s primary influence on prices is through demand. Economic recession decreases activity, thus decreasing demand and prices; economic growth increases prices. If you don’t like the prices, then just as with any other commodity, the recourse is to not buy. If you are a buyer, you are a participant in demand.

There are a few other contributors to gas prices that are either small, regional, or short-term. One is the gas tax. The federal gas tax was raised to 18.4 cents per gallon in 1993, and it has remained at that level ever since. At the time, this tax represented about 14% of the price of gas. With current gas prices, this federal tax now constitutes only about 5% of the price of gas. The gas tax under the Obama administration is also at its lowest level in constant dollars in over two decades. This is also one of the smartest taxes we have, since it not only improves roads but incentivizes lower consumption better than any MPG mandate can (I would love to see it increased). Oil futures and trading affect the price, but this too has minimal long-term impact, and can help cope with supply fluctuations. Like other global products, inflation and exchange rates affect the current price, but the last few years have seen lower than average inflation and a general strengthening of the dollar against other currencies. Finally, there are different state and local taxes, different regional requirements on the quality of gas, and different transportation costs due to proximity to refineries and sources of production; these are the primary drivers of the differences in gas prices between locales, but they aren’t related to the overall national gas price trends. For each of these factors, the federal government either has little control or has avoided any price-increasing policy changes.

To grumble about prices and make it political is generally either ignorant (the principal way the US reduces prices is through economic recession) or manipulative. Drilling or building more pipelines has a negligible effect on global supply. And the US is depleting its proven reserves at least 4-5 times faster than most countries in the world; drilling faster now as a way to avoid foreign dependence on oil just shifts an even greater dependence to the next generation. As someone concerned for children’s futures, I don’t want to push them further into the dead end of oil addiction.

Specifically, the supposed benefits of the Keystone pipeline towards lowering prices are particularly vacuous: pipelines don’t produce oil, they just move it around, and as many have pointed out, it will move oil to the gulf for easier export, making it likely to actually slightly increase gas prices in the midwest. The real question with regard to Keystone is simply the ethics of increasing the efficiency of transporting tar-sands oil versus the ecological impact.

In reality, increased gas prices are basically due to increased economic activity and recovery (the dominant supply factors are mostly out of our control), increased spring/summer driving, and supply concerns over possible conflict with Iran. The primary effect America has had is in economic recovery (and possibly we’ve played a part in some short-term supply concerns by threatening military action against Iran). You can’t eat your recovery and have your cheap oil too.

When it comes to the amazing amount of convenient energy we can buy in a gallon of gas, gratefulness is a better attitude than complaining, and reduced usage is a better response than political manipulation.