A Broken Heart for Valentine’s Day

I wrote a little overview of the Valentine’s Day gala we put on for Love146, an organization fighting human trafficking, and submitted it to our local paper (hence the faux journalistic style and Utah-specific comments). They never published it, so here it is:

This last Tuesday, a unique Valentine’s party took place. While couples across America were packing restaurants to celebrate their relationships, about 30 couples gathered in a small church in east Sandy with a little different focus. They also came together because of love, and also enjoyed a catered dinner and live music, but the gathering was not the typical Valentine’s evening of romance. It was focused on the heartbreaking realities of human trafficking. This was an awareness and fund-raising effort for an organization called Love146, which works to defend, protect, and restore children caught in the tragedy of the sex slavery industry.

The reality of modern-day slavery is indeed heartbreaking, even shocking. We learned that approximately 27 million people are enslaved today, generating $32 billion annually (the second most lucrative criminal activity). The majority of female victims are in the sex industry. An estimated two children are sold every minute. We heard the story of the torturous life of a girl caught in sex slavery, left disfigured from beatings.

This Valentine’s Day, attendees also learned how Love146 was started in response to this tragedy. The founders went overseas on an investigative trip to a brothel to see trafficking first hand. Young girls were contained behind a window, despondently waiting for their next customer, each girl reduced to a number on a menu. However, one girl stood out to them. She stared intently back; she still had fight left in her. That girl was #146, and from that encounter the organization was born.

While these stories left many in tears, there was also reason for hope. This fund-raising effort had a substantial impact. In fact, the final presentation of the night showed the restoration homes that have been built to protect and nurture children who have been rescued out of slavery.

But surely this doesn’t happen here in America, right? Sadly, about 100,000 US children are forcefully engaged in prostitution or pornography, and the average age a female enters prostitution is 12 to 14. Even here in Utah, there have been hundreds of cases of trafficking, and the state has some of the highest pornography usage in the US, fueling demand.

Dinner attendees aren’t the only modern abolitionists; you too can be a part of this movement and make an important impact even with a small amount of time and money. International exchange rates mean donations to groups like Love146 go a long way toward tangible prevention and rescue efforts. You can join the local effort in Utah by supporting the local organization Operation61 (they host an annual race in Liberty Park called STOP TRAFFIC). Even without money, you can make a difference. Learn more. Make others aware. Call your US senators Mike Lee and Orrin Hatch, and your representatives, to urge them to keep US supply chains slavery-free, use diplomatic efforts to protect victims, and fund the Office to Monitor and Combat Trafficking. This office is a tiny expense (we spend roughly 333 times as much to fight drugs as we do to fight trafficking), and this is a non-partisan issue, so even moderate pressure can have a tremendous impact.

I was proud to be a part of this evening, and I wanted to commend the generous sponsors that made the evening possible, including SitePen, GSBS Architects, PurCo, SDI, USANA, the Point Christian Church, Chick-fil-A, and caterers Good Day Catering. I am also grateful to be part of a faith community, Sandy Ridge Community Church, that supports putting faith into action, pursuing the Biblical call to fight injustice.

Sometimes a broken heart on Valentine’s Day is a good thing!

Next Mobile Features

I wanted to suggest a few features that I believe could be significant in the mobile (web) space:
* Adding “force” and/or “radius” support to touch events. I believe this was already in the W3C specification at one point. Being able to detect the intensity or force of a touch event opens up entirely new possibilities of touch interaction. Developers could explore many new paradigms for user interaction with this extra information. Let’s not abandon this.
* A web/JavaScript interface for triggering and controlling haptic feedback. Android (and maybe others?) has made some effort to provide haptic feedback in the form of vibrations in response to button/key presses (and has native APIs for this). However, there are many improvements and innovations that could be made if (web) applications could trigger the feedback themselves, providing their own feedback based on application-specific conditions. I would also love to be able to dictate something more natural and brief than a vibration that lasts for a couple hundred milliseconds. A single oscillation might be more appropriate for a button or keyboard press, and longer vibrations might be more appropriate for other actions.
* Touch hover events. Yes, hovering my finger over the touch screen should trigger hover events. I know you are thinking that a touch screen can’t detect your finger until you actually touch it. But come on, capacitance can be detected in objects without actual contact by simply moving to a sufficiently high frequency. Sensing at a distance is a pretty basic electrical principle. Surely a touch screen could be engineered to detect finger hovering.
* Apps should be able to register as a MIME type handler with the browser so the browser can negotiate MIME types and trigger an application directly in response to links that an application can handle. For example, my Twitter application should register application/vnd.tweet. The browser should then add application/vnd.tweet to the Accept header, and if a server returns an application/vnd.tweet response (as twitter.com should do if application/vnd.tweet is in the Accept header), it should be handled by my Twitter application.
* Detachable/replaceable camera lenses (not actually about web technology, just mobile devices). I want to be able to detach my phone’s camera lens and attach a real camera lens. I don’t need an SLR-level lens, just one of the low-end lenses like on a typical $100-$200 camera. The 8MP CCD on my phone is more than capable of capturing a good image if it had a decent lens on it, something bigger than you can fit on a phone. I just want to be able to carry a replacement lens if I know I am going to take pictures, and be able to snap it on as needed. It is fine if I have to manually focus it.
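The first two ideas above can be sketched together: a touch event’s force reading (a 0..1 value, where a browser exposes one) driving a brief pulse through the Vibration API. The event wiring is an assumption about browser support and is left commented out; the mapping function itself is plain JavaScript:

```javascript
// Convert a normalized touch force (0..1) into a short vibration pulse
// duration in milliseconds, clamped so feedback stays brief.
function forceToPulseMs(force) {
  var clamped = Math.max(0, Math.min(1, force));
  return Math.round(10 + clamped * 40); // 10ms for a light tap, up to 50ms
}

// Hypothetical wiring in a browser that exposes Touch.force and
// navigator.vibrate (not all browsers do):
// element.addEventListener("touchstart", function (event) {
//   var touch = event.touches[0];
//   if (touch && typeof touch.force === "number" && navigator.vibrate) {
//     navigator.vibrate(forceToPulseMs(touch.force));
//   }
// });
```

A harder press would produce a slightly longer pulse, giving the application-specific feedback described above.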

Imperative, Functional, and Declarative Programming

November 8th, 2008
Much of the effort in modern programming language evolution is focused on moving beyond the low-level paradigm of imperative programming and into the realm of functional and declarative programming. Imperative programming is the simplest way to interact with hardware and generally forms the foundation of higher-level constructs. However, imperative programming is a highly error-prone approach and can obstruct optimizations and implementation opportunities that better abstractions provide.

Functional programming focuses on computations and aims to minimize mutating state. It avoids side-effects and changing data as the basis for operations. Many computations that might rely on changing variables and state information in imperative programming can be described in terms of mathematical computations in functional programming. Functional programming may rely on imperative steps in the implementation, but these imperative actions are ideally abstracted away. For example, let’s take a simple loop that finds all the objects in an array with a price property under 10. In imperative programming, this would be:

function bestPrices(inputArray){
  var outputArray = [];
  for(var i = 0; i < inputArray.length; i++){
    if(inputArray[i].price < 10){
      outputArray.push(inputArray[i]);
    }
  }
  return outputArray;
}
Can you spot the state mutations that we had to induce in this imperative approach? On each iteration of the loop we had to modify i (increment it), and for each match we had to modify outputArray (push appends a value to it).

Now if we use a more functional approach:

function bestPrices(inputArray){
  return inputArray.filter(function(value){
    return value.price < 10;
  });
}
This code is completely free of side-effects. We simply described the operation in the form of expressions. The underlying implementation hides all the imperative actions necessary to make this work.

Declarative programming is defined as programming in terms of "what" instead of "how". Functional programming can be categorized as a form of declarative programming since we describe "what" computation should take place with expressions, instead of describing how the computation should be performed in a series of steps as in imperative programming. However, the declarative programming concept goes beyond just functional programming.

One of the central aspects of most applications is persisting data. A core service that most applications provide to users is the ability to store information relating to the use of the application. A social networking application is useless if it can’t actually remember anything about you and your friends. The functional paradigm is limited here; its methodology stands in complete opposition to the central goal of the application. Quite simply, applications must deal with mutating state and data. Here we can still apply declarative programming concepts, describing what the data is, instead of how it was created or how we modify it. Perhaps the most critical and invaluable benefit of declarative programming is that the representation of "what" the object is implicitly dictates "how" the object is modified. With a declarative approach there is no need to describe how we interact with and modify data, because the structure of the data reveals the how without any further instruction.

JSON provides a great example of declarative programming. JavaScript supports numerous imperative techniques, but JSON is a declarative subset, which precisely defines what an object’s state should be without directions on how to get it there. For example, we could use imperative techniques:

var movie = new Object();
movie.title = "Transsiberian";
movie["rat" + "ing"] = 1 + 2;
movie.setHowLong = function(len){
  this.length = len.trim();
};
movie.setHowLong("2 hours ");
This representation describes how to build the state of the object, and can be of unbounded complexity, but it is not actually a representation of the final state. Alternatively, with JSON:

{"title":"Transsiberian",
"rating":3,
"length":"2 hours"}
The application of the concept of representation implicitly defining the method of modification can be seen here in the simple JavaScript object. The object’s state is observable as a set of properties, and we can immediately infer from the presence of these properties how we modify the object. We can simply set one of the existing properties to modify the state, and the effect of setting a property is obvious. Conversely, with an imperative approach where opaque object methods are used to modify an object’s state, the effects are not inferable; one has to refer to out-of-band documentation to understand the interaction necessary to cause a state change.

The other central concept of a declarative approach to data is reversibility. If we modify the state of this movie object, there is a clear, unambiguous, easily computable representation for the new state of the object with a declarative definition. However, this is not true with the imperative definition. With the unbounded complexity possible in the imperative representation, it is impossible to always know exactly which step in the directions needs to be modified to correspond to the state change.

Another example of declarative programming is HTML. HTML represents the state of the layout of the page. The steps for how to get into a particular layout are not described; the layout is a state that is represented by the HTML. HTML provides clear reversibility as well. Layout changes can be clearly and directly mapped to changes in the HTML representation.

Applications themselves are a form of state as well, both at the low level (code occupies memory in the form of binary data) and at the high level in languages that reify code. JavaScript is an example of a language where code is data. Here as well, we can benefit from code being organized in a declarative approach rather than an imperative approach. An application runtime state that is the result of confusing imperative steps is much more difficult to debug than an application that has a clear declarative organization.

Furthermore, declarative programming can often allow programmers to work within a "live" interactive environment, where code can be changed on the fly without requiring a restart. Imperative programming often means that the only way to predictably see the effect of a change in imperative code is to restart. We certainly want to be able to move towards faster development techniques that aren’t stuck in windows-style "restart" cycles.

Once again, declarative programming can be built on imperative programming. If a strict discipline of rules is applied to imperative steps, the consistency can yield a declarative form in the context of those rules. For example, if we said that all class declarations must take the form of assigning a constructor function to a variable and then assigning a JSON-style object to the constructor’s prototype, then within the context of these rules, we have a reversible representation that can still be used even if we change the state of the class (changing method implementations, for example).
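Under the rules just described, such a class might look like this (a minimal sketch; the names are illustrative):

```javascript
// A class following the proposed convention: a constructor assigned to a
// variable, with its behavior declared as one JSON-style prototype object.
var Movie = function (title) {
  this.title = title;
};

Movie.prototype = {
  rating: 0,
  describe: function () {
    return this.title + " (rating: " + this.rating + ")";
  }
};

var m = new Movie("Transsiberian");
// Changing a method is just reassigning a property of the prototype
// object, so the class can be evolved in a live environment.
```

Because the prototype is a single declarative object, the class's representation stays reversible: the current set of methods and defaults is always observable as that one structure.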

It is important that these principles direct the continuing evolution of JavaScript. Techniques like pseudo-classes based on side-effects within constructor function execution are a regression in the evolution away from imperative toward declarative programming. Prototype-based inheritance does a great job of maintaining declarative programming, as the application itself can be continually updated with interactive code. Debugging can be "live", and classes can be evolved without requiring environmental restarts.

JavaScript Expressions – Beyond JSON

September 22nd, 2008
JSON is a powerful, simple, expressive format for data structures with a high level of interoperability; implementations exist in virtually every popular language. However, there are certainly situations where developers want additional constructs for effectively representing data that are not natively supported in JSON. Perhaps the most common usage of JSON is for the consumption of data in the browser. Most JavaScript libraries parse JSON by using eval, and consequently are actually capable of full JavaScript expression evaluation, of which JSON is a subset. JavaScript expressions support a much wider range of constructs than pure JSON. Usually a simple JSON/JavaScript expression parser looks like:

function parseJson(jsonText){
  return eval("(" + jsonText + ")");
}
One of the most oft-desired data types that JSON doesn’t provide is a date type. Numerous creative, bizarre, weird, and silly techniques have been proposed for expressing dates in JSON. These methods often require extra parsing or walking strategies. Douglas Crockford’s reference library for JavaScript JSON serialization serializes dates to strings in ISO format. I have written about deserializing these ISO dates. But on deserialization it is not possible to determine whether a value is actually a string or is really intended to be an actual date object. However, if the recipient of the JSON is known to support full JavaScript evaluation (like the browser with a library using eval), the solution for delivering a date value is simple: we can just use the normal JavaScript constructor:

{"mydate":new Date(1222057313264)}
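Fed through an eval-based parser like the one above, this expression yields a live Date object rather than a string (a sketch; eval must only ever be used on trusted input):

```javascript
// Evaluate a JavaScript expression, as eval-based JSON parsers do.
// WARNING: eval executes arbitrary code; only use it on trusted input.
function parseJs(text) {
  return eval("(" + text + ")");
}

var result = parseJs('{"mydate": new Date(1222057313264)}');
// result.mydate is a real Date instance, not an ISO string, so no extra
// walking or post-processing step is needed.
```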
There are also several numeric entities that JavaScript provides that are not available in JSON, including Infinity, -Infinity, and NaN. I am not sure why you would need it, but undefined can also be transferred with JavaScript. Functions can be included as well:

{
  "reallyBig": Infinity,
  "parsedString": NaN,
  "whatValue?": undefined,
  "doSomething": function(arg){ return doSomethingElse(arg); }
}
The data representations demonstrated here are not JSON; they are JavaScript using the same object/array literal syntax that JSON is based on. However, certainly one of the biggest benefits of JSON is its interoperability. If your data is going to be consumed by more than just eval-capable JavaScript libraries, you must make your data available in pure JSON format as well. This is well-handled through content negotiation. If you are using JavaScript expressions to transfer data, you should make sure your requests from the browser actually specify that they can handle JavaScript:

Accept: text/javascript
Your server should be prepared to handle requests from clients that indicate that they only understand pure JSON:

Accept: application/json
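On the server side, this negotiation can be sketched as a small dispatch on the Accept header (an illustrative function, not tied to any framework; the conservative default is pure JSON):

```javascript
// Decide the response serialization from the request's Accept header.
// Only clients that explicitly advertise text/javascript get the full
// JavaScript expression form; everyone else gets pure JSON.
function chooseFormat(acceptHeader) {
  var accept = (acceptHeader || "").toLowerCase();
  return accept.indexOf("text/javascript") !== -1 ? "javascript" : "json";
}
```

With this in place, a server can safely serialize dates and non-finite numbers as JavaScript expressions for capable clients while remaining interoperable with everyone else.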
Persevere, the JavaScript/JSON application and storage server, also provides support for parsing and storing these extra constructs, including non-finite numbers (NaN, Infinity) and functions. Persevere can output the data as JSON or as a JavaScript expression.

JSON is certainly a powerful, expressive data format. This post is by no means an attempt to expand or lobby for the modification of JSON. However, when it is known that the consumers are actually JavaScript-capable clients, it can often be advantageous to use the full power of JavaScript to represent data, while still providing JSON as a representation for non-JavaScript-capable clients.

The Next Great Protocol: HTTP

February 12th, 2008
I suppose this post would have been more prophetic a decade or two ago. It was in the 90’s that the HTTP protocol really became the Great protocol. It is the foundation of the World Wide Web, and the language with which browsers were able to really open the doorway to the Internet for us. So am I a little behind the times to suggest that HTTP now has an emerging future relevance? Is HTTP a relic of the past, or does it have something to contribute to the future?

One of the distinctive aspects of current Internet technological advance is the growing realm of openly sharing and utilizing data from disparate sources. Facilitating this progress is one of the principal goals of my work and this site. I want the Open Web to be more than just a bunch of pages that are developed without proprietary constraints; I want the Open Web to be the environment for an open flow of information, with intelligible interconnection of data that can give participants unprecedented leverage and permutations of capabilities. Mashups are a buzzword to describe this process. However, in order for information to flow rapidly, there must be commonly understood communication. JSON has enormous potential because it is so simple, expressive, and pervasive that it forms an excellent syntax for expressing data. However, JSON is not a transport. Two agents that wish to dialogue may understand JSON data, but they still need a mechanism to communicate and transfer that information. HTTP is almost ubiquitously the right choice for the transport. The incredible adoption of HTTP is the main reason for this: no transport is more widely understood.

What is wrong with HTTP?

Before going into the benefits of HTTP, let us look at the problems with HTTP, or more specifically, THE problem with HTTP. The most fundamental problem with HTTP is that it requires that every single response be preceded by one corresponding request. The specification describes HTTP as a request/response protocol. This constraint has an enormous impact on the capabilities of HTTP. The first major impact is in hindering performance optimizations. In order to load a web page, every resource must be requested before the server can send the resources. This creates a significant latency problem: there can be large gaps in transfers while servers are waiting to receive requests. A server could easily determine the most likely resources that a user agent will need and send them before the request, if this constraint were not in place.

This constraint is also the fundamental cause of consternation with Comet development. Comet consists of efforts to allow servers to send messages to clients asynchronously instead of in immediate response to a request. Doing this requires creating an outstanding HTTP request that the server can respond to when it wants to. Comet push capabilities could easily be achieved if servers could simply send messages to the client without requiring a preceding request.

Throwing away the entire protocol because of a single issue is absurd, rather let’s fix or enhance the protocol. In recent articles I have discussed how non-request/response-bound HTTP messages can be sent within HTTP messages in order to deal with this problem using existing infrastructure.

So why is HTTP the right choice for future information transport?

* It is so pervasive – HTTP is everywhere. It is how all browsers communicate with web servers. HTTP is understood by an overwhelming amount of software. Attempts to reinvent the functionality and capabilities of HTTP essentially ask for this broadly understood language to be ignored in lieu of a new one. Without very significant advantages to the new semantics and vocabulary, such attempts are generally either doomed to obscurity or, worse, cause a division into multiple semantics for the same thing, increasing code complexity and costs.
* It has the constructs for tomorrow – With the ever-increasing interchange of data in the future, more sophisticated and robust techniques for communicating data are needed. Many of these techniques already exist in HTTP, but have simply not yet been needed with yesterday’s technology. The future of high-performance, intelligent data interchange will hinge on capabilities that already exist in HTTP, including:
  * Content negotiation – HTTP includes vocabulary for negotiating between different formats.
  * Partial data transfers – HTTP has an extensible mechanism for transferring a range of information.
  * Robust error handling – HTTP includes a comprehensive set of error codes.
  * Parallel scalability – Perhaps one of the most impressive features of HTTP is how carefully it is designed so that demand can easily be scaled across numerous machines with HTTP proxy servers.
  * REST/CRUD semantics – HTTP provides semantics for basic create, read, update, and delete operations.
  * Performance improvements – HTTP pipelining has only begun to be utilized (only Opera has it turned on by default). Substantial performance improvements can be realized through pipelining.
* Emerging technologies give us new leverage with HTTP – With traditional web application development, much of the workings of HTTP were hidden away by the browser and the server. However, the Ajax revolution gives developers far greater control of HTTP messaging. Most Ajax developers have simply used XMLHttpRequest as a means for communicating simple payloads of data back and forth to the server, but XHR has given developers new access to HTTP capabilities through header metadata, and leverage to utilize the full semantics of HTTP for more meaningful communication.
Furthermore, with new XHR capabilities coming soon (FF3 will have cross site XHR support), XHR communication will involve much more than simply communicating with your own server. Communicating with your own server does not necessarily require widely understood vocabulary, but when communicating with other servers, the cost and efficiency of integration will be directly related to how much shared vocabulary can be utilized to provide jointly understood communication. New ways to utilize, extend, and leverage HTTP are being developed as well like the Atom Publishing Protocol.
Does it really matter what is on the wire? Can’t we simply distribute API-based communication handlers? Consider this: is it easier to set up a TV to tune into the available TV stations, or is it easier to set up a printer to work with your new operating system? TV stations have standardized on a single format for broadcasting content. Connecting a TV to a station is as simple as turning it on and choosing your station. On the other hand, printers have no standardized communication with computers. Devices like printers use API-based communication handlers (AKA drivers). With a huge number of different protocols for each printer, and different operating systems to interact with, there is an enormous number of permutations of different drivers that must be developed, which are very prone to incompatibilities. While the situation has improved, many have experienced the frustrating effort that can go into trying to find the right driver for your OS/printer combination.

But in the realm of browser communication, does it matter, since we are all using JavaScript? Absolutely. There may be a single language in the browser, but there are different JavaScript libraries, which may prefer different APIs. In addition, the browser environment can be very bandwidth-constrained. Requiring another library for each data endpoint that you want to connect to does not scale well. And as we move towards more service-oriented architectures, browsers will not be the only consumers of information. Many other clients must be considered as well.

We need to be examining the HTTP specifications and learning how to leverage the power that is available, to maximize our potential in the coming world of extensive data interchange and mashups. Next time you need to create Ajax communication to trigger CRUD operations, consider using the RESTful HTTP methods (PUT, POST, and DELETE). Do you need to deliver multiple formats of a resource? Consider using HTTP content negotiation. Do you want Comet capabilities in your application? Consider using an HTTP standards-based approach. The more we can utilize what is there, the more widely we can be understood, and the more efficiently we can utilize the infrastructure of the web. The full utilization of HTTP can provide a solid foundation for the future of data interchange.
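As a sketch of the RESTful CRUD mapping suggested here, a request can be described as plain data before being handed to XMLHttpRequest's open/setRequestHeader/send calls (the function and operation names are illustrative):

```javascript
// Describe a RESTful CRUD call as plain data. The returned object maps
// directly onto XMLHttpRequest: open(method, url), then one
// setRequestHeader call per header, then send(body).
function crudRequest(operation, url, body) {
  var methods = { create: "POST", read: "GET", update: "PUT", remove: "DELETE" };
  return {
    method: methods[operation],
    url: url,
    headers: { "Accept": "application/json" }, // content negotiation
    body: body === undefined ? null : JSON.stringify(body)
  };
}
```

Carrying the operation in the HTTP method, and the desired format in the Accept header, keeps the semantics on the wire rather than buried in an ad hoc API.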

JSONP Header Transfer Proposal

December 24th, 2007
JSONP is a proposal for performing cross-site JSON data interchange by using script tag insertion with a callback to deliver the data payload. JSONP can be used in place of XmlHttpRequest in cross-site situations. However, when using XmlHttpRequest, there are situations where the JSON data in the content body is not the only relevant information; the response headers, as well as other aspects of the HTTP request and response, may also contain important information. I propose that JSONP can be extended in a small and simple manner to accommodate the transfer of header information in a suitably interoperable manner.

I propose that when custom headers need to be included in the request, the headers simply be included as parameters. When servers that are responding to JSONP requests want to include response header information, they may optionally respond with a second argument that contains header information in the form of a JSON object/map. The object should contain properties with names corresponding to header names, and values corresponding to header values. The request format can remain the same:

http://myserver/requestJSON?callback=mycallback&Accept=application%2Fjson
And the response would look like:

mycallback({"name":"foo"},{"Content-Type":"application/json","Expires":"Mon, 24 Dec 2007 17:09:04 GMT"})
This addition to JSONP should generally not break existing JSONP clients, as normally only one parameter is used. It allows JSONP to emulate a more semantically correct manner of transferring information that conceptually belongs in headers.

Usages could include transferring the "Last-Modified" header so clients could determine the freshness of the data, sending an If-Modified-Since header in requests, and many more.

In addition to transferring request and response headers, I propose that additional HTTP information could also be transferred in pseudo headers. The following pseudo headers may be used:

request:
* httpMethod – Indicates the HTTP method used (GET, POST, PUT, DELETE, etc.).
* httpContent – Indicates the body of the content of the HTTP request (such as the POST body).
* httpNoCache – With non-idempotent methods, you should generally include an additional parameter with a random unique value to ensure that the request is not cached.

response:
* httpStatusCode – Indicates the HTTP status code.
* httpStatusText – Indicates the HTTP status text.
A more sophisticated example request could be:

http://myserver/requestJSON?httpMethod=POST&httpNoCache=23n9zs92l&httpContent={"name":"bar"}&callback=call1
And the response could be:

call1([{"name":"foo"},{"name":"bar"}],{"httpStatusCode":200,"httpStatusText":"OK","Date":"Mon, 24 Dec 2007 17:09:04 GMT"})
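On the client, a callback built for this proposal can accept the optional second header argument while remaining compatible with plain JSONP responses (a sketch with illustrative names):

```javascript
// Wrap an application handler so it tolerates both plain JSONP responses
// (data only) and the extended form (data plus a header object).
function makeCallback(handler) {
  return function (data, headers) {
    handler(data, headers || {});
  };
}

var received = null;
var call1 = makeCallback(function (data, headers) {
  received = { count: data.length, status: headers.httpStatusCode };
});

// Simulating the script the server would return:
call1([{"name":"foo"},{"name":"bar"}], {"httpStatusCode":200,"httpStatusText":"OK"});
```

A server that never sends the second argument still works: the handler simply sees an empty header map.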
Of course, this enables a greater level of interaction, and therefore the prerequisite security warnings must be heeded. You can get yourself in a lot of trouble with XSS if you are not using proper explicit authentication schemes.

CrossSafe

This weekend I released CrossSafe. Briefly, CrossSafe provides secure cross-domain JSON requests and partially implements the JSONRequest specification (the get and cancel methods). Ajaxian covered the release, and you can also read more about it at the project page, see a demo, or download it. Rather than repeat the project description here, I thought it would be more interesting to describe the approach and elicit feedback on the future of secure cross-site requests.

CrossSafe uses nested iframes with a different domain than the parent window to set up a secure channel of communication with cross-site servers. This works by passing a JavaScript object to a child frame, using an alternate host name (like webservice.json.com) to prevent cookie access, and changing document.domain to prevent access to the window, DOM, and the rest of the JS environment. By bringing this together, the dynamic script tag/cross-site scripting approach can be used to retrieve cross-site JSON data, while the scripts that are loaded from the other site are sandboxed. The parent window can retrieve data from these scripts, but the scripts cannot access the parent window. This approach has been described elsewhere, but as far as I know, this is the first implementation. The implementation also follows the JSONRequest API specification, which allows you to use a standard API, and the library defers to a native implementation when it becomes available.

There is another approach for accessing cross-site data securely that uses iframe proxies, called fragment identifier messaging (FIM). Dojo has a good implementation of this approach. However, I believe this approach suffers from a couple of problems. First, it requires a level of server cooperation that has not been widely implemented yet: servers must have Dojo’s iframe proxy script available on their site. CrossSafe, on the other hand, requires that servers implement callback parameters, which are already available with web services from Yahoo, Flickr, Codinginparadise.org’s transclusions, and JSPON. FIM also relies on polling to transfer data. Second, I have not done any tests to verify this, but I would be inclined to believe it is a slower approach as well. That said, I am interested in possibly implementing the JSONRequest.post method using this approach. The JSONP/XSS technique is only capable of making GET requests, while the FIM approach does support POST. If I integrated Dojo’s FIM implementation into CrossSafe, all three JSONRequest methods could be available, and it could simply be recommended to use the get method whenever possible because of its performance and interoperability advantages.

Another issue with these approaches is that there are no real standards for how to do these requests, since they require server cooperation. The JSONP/XSS callback approach has seen various callback parameter names used, including jsonp, callback, and jsoncallback. CrossSafe supports changing the parameter name, but it would be great if we could standardize it. Despite the fact that the original JSONP article proposed jsonp, I propose that we use the parameter name callback. It is succinct and clear, and let’s face it, Yahoo is the most significant provider of JSON out there, and this is the parameter name they use.
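Building the request URL with a configurable callback parameter name, defaulting to callback as proposed, might be sketched as:

```javascript
// Build a JSONP request URL. The callback parameter name is configurable
// to cope with services that expect jsonp or jsoncallback instead.
function jsonpUrl(base, callbackName, paramName) {
  var param = paramName || "callback";
  var sep = base.indexOf("?") === -1 ? "?" : "&";
  return base + sep + encodeURIComponent(param) + "=" + encodeURIComponent(callbackName);
}
```

The resulting URL would then be set as the src of a dynamically inserted script tag to perform the actual request.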

The FIM approach could also use standardization. Dojo has the best implementation that I know of, but I believe OpenAjax.org is currently working on standardizing this as we speak. At least I hope…

Let me know if you have any thoughts.