Microsoft is going to deploy a new, simplified login experience across its different services.
You can opt in to this new UX by going here, although you will have to opt in again every week.
The video is a nice, comprehensive introduction to Windows 8 for everyone (i.e. not only for geeks).
My experience with Windows 8 is that once you’ve figured out a few things (e.g. how to activate contextual Search), you can start appreciating the product. Before that, it just looks and feels weird and annoying.
I’ve learned the ropes by myself and through all the blogs I’m reading. But something tells me my wife won’t have that patience. I’ll try that video on her!
Web Socket is a new protocol, standardised in RFC 6455, that attempts to bring the best features of HTTP & TCP together. More specifically, it aims to be connected & full-duplex (like TCP), allowing servers to call back clients, while remaining universal (like HTTP).
This wasn’t done without pain. Web Socket has a non-trivial handshake process, done over HTTP, after which the underlying TCP connection is reused. The handshake involves exchanging a key; this key exchange ensures both server & client are aware of the Web Socket protocol. It also makes the protocol much harder to implement in your beloved garage.
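To make the handshake concrete, here is a sketch (my own C#, not from any particular library) of the accept-key computation RFC 6455 mandates from the server: base64-encode the SHA-1 of the client’s key concatenated with a fixed GUID.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class WebSocketHandshake
{
    // Fixed GUID defined in RFC 6455, section 1.3.
    const string Rfc6455Guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // Computes the Sec-WebSocket-Accept value the server must return
    // for a given Sec-WebSocket-Key sent by the client.
    public static string ComputeAcceptKey(string clientKey)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(
                Encoding.ASCII.GetBytes(clientKey + Rfc6455Guid));
            return Convert.ToBase64String(hash);
        }
    }
}
```

Feeding it the key from the RFC’s own example, "dGhlIHNhbXBsZSBub25jZQ==", should yield "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".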
After the handshake, Web Socket provides a frame-based data transfer protocol, allowing both servers & clients to exchange data in a predefined, low-overhead way.
Web Socket aims at replacing long polling in rich web sites, allowing servers to update clients in real time in an efficient manner.
I’m a little sceptical about that protocol. Besides the Byzantine aspect of the handshake (designed to fool intermediaries into believing Web Socket is just HTTP, so that no router on Earth needs to be updated for the protocol to work), I question its scalability. We need servers to keep a TCP connection open with each client. What happens if the server fails? How do we load-balance? The beauty of HTTP is that it is connection-less: client-server interactions are bounded in time, which allows us to do a lot of things, for instance load-balancing requests across a web server farm. With Web Socket, it seems that we become stateful.
What we really need to fix the long polling problem is a way for a server to establish a connection with a client. Because this is impossible in most cases due to firewall rules, we came up with a complicated mechanism approximating the behaviour we want and bringing its own challenges with it.
This is another instance of the sclerosis of Internet: the inability for the Internet to fundamentally evolve beyond its 1995 design. IPv6 anyone?
Ok, this rant aside, Web Socket is a well thought-through protocol supported by both Windows 8 & IIS. It works.
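For reference, .NET 4.5 exposes the client side through System.Net.WebSockets; a minimal sketch (the endpoint URI is a placeholder I made up):

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

static async Task EchoOnceAsync()
{
    var socket = new ClientWebSocket();
    await socket.ConnectAsync(new Uri("ws://localhost/updates"),
                              CancellationToken.None);

    // Full duplex: we can send on the connection...
    byte[] hello = Encoding.UTF8.GetBytes("hello");
    await socket.SendAsync(new ArraySegment<byte>(hello),
                           WebSocketMessageType.Text,
                           true,  // end of message
                           CancellationToken.None);

    // ...and receive on the same connection, the server pushing at will.
    var buffer = new byte[1024];
    WebSocketReceiveResult result = await socket.ReceiveAsync(
        new ArraySegment<byte>(buffer), CancellationToken.None);
    Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
}
```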
Agile is more than 10 years old but still has the whimsical attributes of a brand-new artefact. I believe this isn’t unique to agile but tends to be the case for any delivery methodology. It seems that however long a methodology has been around, only a limited set of its characteristics is remembered, barely understood, and cited ad nauseam.
PMI? Large project plan, rigid schedule.
Agile? Stand up meetings, open bar (no scope keeping).
RUP? Lots of deliverables, configurable.
Now Paul’s article dives beneath the surface, exploring different ways to use an agile methodology depending on what we want to optimize. His analogy with athletes, body types and training is quite good: e.g. a sprinter doesn’t train the same way as an ultra-marathoner, although both might be world-class athletes. Also, trade-offs are part of any success recipe: no athlete can excel at both long & short distances.
This graph of his is especially good at expressing the idea of trade-offs in delivery:
What do you need to optimize in your project?
I encourage you to read Paul’s article.
They finally did it: the future release of Entity Framework (version 6) will sport asynchronous behaviour based on .NET 4.0 Task Parallel Library (TPL).
The API is pretty neat. First, SaveChanges gets an async brother, SaveChangesAsync, returning a Task. So we can now write things like:
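The code sample seems to have gone missing here; a minimal sketch of what such code could look like (MyContext, Employees and Salary are hypothetical names):

```csharp
public async Task GiveRaiseAsync(int employeeId)
{
    using (var context = new MyContext())
    {
        var employee = context.Employees.Find(employeeId);
        employee.Salary += 1000;

        // The calling thread is released while the DB round-trip happens.
        await context.SaveChangesAsync();
    }
}
```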
The more complicated topic is queries. LINQ was designed before TPL and doesn’t have a notion of asynchrony. They got around it in a clever fashion: LINQ describes the queries while EF lets you enumerate the results asynchronously:
var q = from e in context.Employees
        select e;

await q.ForEachAsync(e => Console.WriteLine(e.FirstName));
So the entire enumeration is done asynchronously, hence Entity Framework can manage the moments when it needs to query the DB for new objects.
This new feature is quite powerful since DB access is typically a place where your thread blocks, waiting for something external. For instance, a web service doing a query and returning data is typically written synchronously, with the thread blocking while waiting for the DB server. Using this new asynchronous mode, we can just as easily write an asynchronous version, much more scalable since no threads are blocked, hence more threads can be used to process requests.
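As a sketch of that pattern (the service and entity names are made up), the synchronous and asynchronous versions differ by very little:

```csharp
// Synchronous version: the request thread blocks on the DB.
public List<Employee> GetEmployees()
{
    using (var context = new MyContext())
    {
        return context.Employees.ToList();
    }
}

// Asynchronous version: the thread goes back to the pool while
// the DB server works, so it can process other requests meanwhile.
public async Task<List<Employee>> GetEmployeesAsync()
{
    using (var context = new MyContext())
    {
        return await context.Employees.ToListAsync();
    }
}
```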
Doing automated unit tests in SharePoint isn’t easy.
As with all libraries that haven’t been designed with unit testing in mind, the SharePoint object model doesn’t expose its dependencies: it connects to a content database determined by the context that created it, and there is no way to redirect it to stub implementations.
That is, unless you can override method invocations. This is what Visual Studio Fakes do.
Fakes allow a developer to create a stub out of a real object by rerouting calls to properties or methods.
For SharePoint, Microsoft just released SharePoint Emulators, a system of Fakes-based shims implementing the basic behaviours of the SharePoint 2010 server object model.
Developers can now use those shims to write unit tests on code using the SharePoint 2010 server object model.
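From what I can tell from the release, a unit test against the emulators would look roughly like this (a hedged sketch from memory: SharePointEmulationScope and EmulationMode ship with the SharePoint Emulators package, and the site URL and list name are made up):

```csharp
[TestMethod]
public void AddingAnItemIncreasesTheListCount()
{
    // EmulationMode.Enabled routes SPSite, SPList, etc. to the shims
    // instead of a real content database.
    using (new SharePointEmulationScope(EmulationMode.Enabled))
    using (var site = new SPSite("http://localhost"))
    {
        SPList list = site.RootWeb.Lists["Tasks"];
        int before = list.ItemCount;

        SPListItem item = list.Items.Add();
        item["Title"] = "Write more unit tests";
        item.Update();

        Assert.AreEqual(before + 1, list.ItemCount);
    }
}
```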
Three weeks ago Forrester released a paper on Cloud Databases.
As pointed out by Microsoft, Forrester declared SQL Azure, Amazon Relational Database Service (RDS), Amazon DynamoDB and salesforce.com’s Database.com the leaders of the pack.
That is quite impressive given Microsoft’s relatively late start compared to those competitors.
SQL Azure is a leading service in the Azure family. As an Azure observer, for me there was a before and an after SQL Azure. Before, Azure was a quaint initiative with potential, but on the fringe, with no mass adoption. When SQL Azure arrived, people got on board. The economics were good, the management and performance made it a no-brainer, and it is actually quite easy to know when it makes sense to use it or not!
With the addition of SQL Azure Federation, enabling managed sharding of tables, SQL Azure outgrew its scale-up limitation and now enables truly huge, consumer-scale scenarios.
Slowly, the entire SQL ecosystem is moving to Azure. Reporting Services has been there for more than a year and SSIS is supposed to come to town soon. We’ve heard rumours about Analysis Services, although this seems to be more for the long run.
I’m always amazed at how a simple user interface can simplify complex tasks.
Take Windows Explorer, where you can drag & drop multiple files from one folder to another. The user sees which folder the files are going into, it takes a few seconds and boom! Doing that on the command line would be much more abstract, let alone much more verbose.
Well, here’s an interesting article about web design and how the UK government managed to migrate 750 web sites into one well-designed gateway.
I wish we could have that in Canada. Last time I did consultancy for the Federal Government of Canada, we were still arguing about a single sign-on solution adopted by less than 50% of government sites. Each department has its own servers, often its own Active Directory, which do not see each other. We are quite a distance from a one-portal solution!
Service Bus is a newly released technology from Microsoft (October 24th, 2012). It aims at being an on-premise equivalent to its Azure Service Bus counterpart, implementing a subset of it (install notes).
Windows Server Service Bus (WSSB) is part of Windows Server licensing and is therefore ‘free’.
The product comes with a configuration console:
The differences between the cloud and on-premise versions of Service Bus are explained here. None seem to stem from a limitation of the on-premise product; instead they relate to the two different environments, e.g. the addressing scheme is fixed in Azure while on-premise it can contain any domain name.
Install instructions are here.
Windows Server Service Bus 1.0 supports Queues, Topics & Subscriptions. It therefore supports straight asynchronous publishing (via queues) and publish-subscribe asynchronous publishing (through topics & subscriptions). There is no guarantee of in-order delivery, although order is mostly respected in queues.
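A hedged sketch of queue-based publishing with the client library (QueueClient and BrokeredMessage live in Microsoft.ServiceBus.Messaging; the connection string and queue name below are placeholders):

```csharp
var connectionString =
    "Endpoint=sb://my-server/ServiceBusDefaultNamespace;StsEndpoint=...";
var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

// Publish asynchronously: the sender doesn't wait for any consumer.
client.Send(new BrokeredMessage("Order #42"));

// Elsewhere, a consumer pulls messages off the queue.
BrokeredMessage message = client.Receive();
Console.WriteLine(message.GetBody<string>());
message.Complete();  // removes the message from the queue
```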
This technology is similar to cookies in a browser: it allows advertisers to know which iPhone user has looked at which ads on which web site. It is anonymous in the sense that no information beyond your ID is passed to advertisers. It allows different advertisers to do targeted advertising, i.e. given your browsing history, serving you a targeted ad.
Tracking is on by default and the setting to turn it off isn’t in the Privacy settings but buried in the General ones, which is a little ironic.
In comparison, a few weeks ago ZDNet reported on Microsoft’s decision to turn off tracking by default and how advertisers reacted, some cranking the crazy talk to unprecedented levels (the call to defend democracy had a kitchen-sink feel to it).
It is obvious that tracking technology, supporting targeted advertising, has become key to the business, and we’re certainly not done hearing about it!