WebSocket is a new protocol, standardised in RFC 6455, that attempts to bring the best features of HTTP & TCP together. More specifically, it aims at being connected & full-duplex (like TCP), allowing the server to call back clients, and universal (like HTTP).
This wasn’t done without pain. WebSocket has a non-trivial handshake, performed over HTTP, after which the underlying TCP connection is reused. The handshake involves exchanging a key, which ensures both server & client are aware they are speaking the WebSocket protocol. It also makes the protocol much harder to implement in your beloved garage.
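To give a feel for that key exchange, here is a minimal sketch of the server side of it: RFC 6455 has the server concatenate the client’s `Sec-WebSocket-Key` header with a fixed GUID, SHA-1 the result, and send it back base64-encoded as `Sec-WebSocket-Accept`.

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the handshake key computation.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key taken from RFC 6455 itself:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A client that receives anything other than this exact value must fail the connection, which is what keeps a plain HTTP server from accidentally “accepting” a WebSocket upgrade.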
After the handshake, WebSocket provides a packet-framing data-transfer protocol, which allows both servers & clients to exchange data in a predefined way with low overhead.
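The low overhead is easy to see in the frame layout: for small payloads a frame is just two header bytes in front of the data. Here is a minimal sketch that builds an unmasked server-to-client text frame per RFC 6455 (payloads under 126 bytes only; longer payloads and client-to-server masking use additional fields).

```python
def text_frame(payload: str) -> bytes:
    """Build a single unmasked server-to-client text frame (payload < 126 bytes)."""
    data = payload.encode("utf-8")
    assert len(data) < 126  # longer payloads need the extended-length fields
    # First byte: FIN bit set (0x80) + opcode 0x1 (text) = 0x81.
    # Second byte: mask bit clear (server frames are unmasked) + payload length.
    return bytes([0x81, len(data)]) + data

print(text_frame("Hi"))  # b'\x81\x02Hi' -> two header bytes, then the payload
```

Two bytes of framing for a two-byte message compares rather well with the headers of a full HTTP request/response round trip.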
WebSocket aims at replacing long polling in rich web sites, thus allowing servers to update clients in real time in an efficient manner.
I’m a little sceptical about the protocol. Beside the Byzantine aspect of the handshake (designed to fool intermediaries into believing WebSocket is just HTTP, so that no router on Earth needs to be updated for the protocol to work), I question its scalability. Servers need to keep a TCP connection open with each client. What happens if the server fails? How do we load-balance? The beauty of HTTP is that it is connection-less: client-server interactions are bounded in time, which allows us to do a lot of things, for instance load-balance requests across a web server farm. With WebSocket, it seems that we become stateful.
What we really need to fix the long-polling problem is a way for a server to establish a connection with a client. Because that is impossible in most cases due to firewall rules, we end up with a complicated mechanism approximating the behaviour we want and bringing its own challenges.
This is another instance of the sclerosis of the Internet: its inability to fundamentally evolve beyond its 1995 design. IPv6, anyone?
Ok, this rant aside, WebSocket is a well-thought-through protocol, supported by both Windows 8 & IIS. It works.
I’m always amazed at how a simple user interface can simplify complex tasks.
Take Windows Explorer, where you can drag & drop multiple files from one folder to another. The user sees which folder the files are going into, it takes a few seconds and boom! Doing the same at the command line would be much more abstract, let alone much more verbose.
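For comparison, the scripted equivalent of that drag & drop might look like the following (a sketch with hypothetical `photos`/`archive` folder names, staged in a temp directory so it is self-contained):

```python
import pathlib
import shutil
import tempfile

# Hypothetical folders, created in a temp directory for illustration.
root = pathlib.Path(tempfile.mkdtemp())
src = root / "photos"
dst = root / "archive"
src.mkdir()
dst.mkdir()
for name in ("a.jpg", "b.jpg"):
    (src / name).touch()

# The "drag & drop": move every .jpg, one file at a time, with no visual feedback.
for picture in src.glob("*.jpg"):
    shutil.move(str(picture), dst / picture.name)

print(sorted(p.name for p in dst.iterdir()))  # ['a.jpg', 'b.jpg']
```

The result is identical, but nothing here shows you where the files are going while it happens — which is exactly the point.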
Well, here’s an interesting article about web design and how the British government managed to migrate 750 web sites into one well-designed gateway.
I wish we could have that in Canada. Last time I did consultancy for the Federal Government of Canada, we were still arguing about a single-sign-on solution adopted by less than 50% of government sites. Each department has its own servers, often its own Active Directory, which do not see each other. We are quite a distance from a one-portal solution!
An update to Internet Explorer 9 Beta is available from Microsoft as of yesterday (November 23rd 2010).
This is an update to the full browser, as opposed to the developer preview builds, which aren’t the full Internet Explorer, although the preview builds do work side-by-side with any other version of IE.
Not much is mentioned about what the update brings. Rumours circulate that a beta 2 would see the light of day before the release candidates. Stay tuned.
A quick note about Microsoft’s contributions to jQuery (namely XYZ): they are now official plug-ins!
Those plug-ins bring a programming model relatively close to Microsoft WPF (or Silverlight), yet quite in line with jQuery. They also show Microsoft’s commitment to jQuery.