It first goes into the motivations for building a new Phone OS and the vision behind it. Then it covers the different development features and how they connect to that vision.
For instance, one of the objectives was to make the phone customizable. One of the main ways to do that is to install applications & games. Now, for this to really work, the application model needs to be predictable, so that a given game wouldn't degrade the entire experience. This is why they went with a managed application model, i.e. .NET (via Silverlight and XNA): applications are verified prior to execution.
Very nice introduction to a promising product.
A specific comment from Ballmer resonated with me:
Because the technology actually is very general purpose, and we’ll see come into the rest of our lives pretty soon. It’s a little camera and microphone that sits on top of your TV set. And if you want to control the TV, you don’t go get some remote control or big fat gaming thing…
Now, if you're like me and have been following recent developments in natural user interfaces, you're quite optimistic about the future of these new technologies. For instance, watch how Johnny Lee was able to hack a Wii remote (basically, using its infra-red sensor) to create a Low-Cost Multi-touch Whiteboard and Head Tracking for Desktop Virtual Reality Displays. Those are inexpensive gadgets opening up a whole new set of scenarios.
Multi-touch has been massively popularized by the iPhone. But multi-touch has its limitations. For instance, have a look at a demo of Microsoft Courier, an upcoming booklet PC from Microsoft. Now this is an early demo, but as groovy as it looks, I can't help but think it showcases the limits of touching a screen to communicate intention to a machine. Scrolling is natural, writing is too. But what about closing the current window, coming back to the main menu, etc.? Are those going to be weird gestures you have to learn? Maybe I'm worrying too much, and maybe Windows gestures weren't so obvious when they were introduced either, but I also think we need more than multi-touch. For starters, I don't want my screen to become a finger-juice sponge.
I don't know if Project Natal itself is going to be the X-Mas hit Steve Ballmer would like it to be, but for me it's a definite step in the direction of more natural user interfaces. It's a bold move: it isn't introducing a new pen, mouse or remote control; it's using your body as the interface!
So Minority Report-type user interfaces might not be so far off after all!
I've decided to move my technology-related blogs over here. This way I'll separate the personal stuff from the technical.
I'll try to blog more often on those subjects from now on.
Microsoft has released a beta version of Microsoft Docs, a tailored version of Office Web Apps for Facebook users.
Any similarity with a package of the same name from Google is pure coincidence.
Microsoft has moved forward with their REST data-access strategy: they've introduced the Open Data Protocol (OData). Formerly known as Astoria, OData is a web protocol for accessing data using different formats, including AtomPub & JSON.
OData is supported in .NET 3.5 SP1. At first glance, it looks like a fancy way to access data, allowing a user to encode simple data queries in an HTTP URL query string and receive data in the form of a feed. But with OData now being a standard protocol (OK, it's a Microsoft standard, but that at least means Microsoft is putting its weight behind it, which is no small thing), its usage is likely to become widespread.
Now what does OData open up? For me, it really opens data access through a business logic layer: the application tier as opposed to the data tier. A big limitation of SQL queries is that, when hosted in Microsoft SQL Server (or another relational database server), they have to look at tables or views. Having the queries run on the application tier widens what your query can look at.
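To give an idea of what "encoding a query in an HTTP URL query string" looks like, here is a minimal sketch of building an OData query URL using the standard `$filter`, `$top` and `$orderby` system options. The service root and entity set names are hypothetical; only the query-option syntax comes from the protocol.

```python
# Sketch: composing an OData query URL.
# The service root and "Orders" entity set are made-up examples.
from urllib.parse import urlencode

def build_odata_url(service_root, entity_set,
                    filter_expr=None, top=None, orderby=None):
    """Build an OData query URL from the $filter/$top/$orderby options."""
    params = {}
    if filter_expr:
        params["$filter"] = filter_expr
    if top is not None:
        params["$top"] = str(top)
    if orderby:
        params["$orderby"] = orderby
    url = f"{service_root.rstrip('/')}/{entity_set}"
    if params:
        # keep the leading "$" of the option names readable in the URL
        url += "?" + urlencode(params, safe="$")
    return url

url = build_odata_url(
    "https://example.com/MyService.svc",  # hypothetical service root
    "Orders",
    filter_expr="Total gt 100",
    top=10,
    orderby="OrderDate desc",
)
print(url)
```

The server evaluates those options against the exposed entity set and returns the matching entries as a feed, which is exactly what lets generic clients query your data without knowing anything about your database.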
Using .NET 3.5 SP1, a team I was leading implemented a little entity set accessible using OData. The entity set represented various states of an integration server (e.g. number of batches running, how many were queued, etc.). What was interesting about it is that the data exposed there didn't exist in any database. It was real-time application server statistics.
You could think about different scenarios: exposing data from more than one database, an external service, and what have you. That is just SOA, right? Well, with the number of OData clients (consumers) increasing, it's now becoming a well-understood web service. Think about a user bringing your data into Excel directly, for instance.
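On the consumer side, a client just parses the feed the service returns. Here is a minimal sketch of reading an OData JSON response shaped like the verbose, "d"-wrapped format WCF Data Services produced at the time; the payload below is made up (entity names like BatchesRunning are illustrative), and a real client would fetch it over HTTP instead of using a literal string.

```python
# Sketch: consuming an OData JSON feed.
# sample_response mimics the verbose OData JSON format ({"d": {"results": [...]}});
# the entry names and values are hypothetical server statistics.
import json

sample_response = """
{
  "d": {
    "results": [
      {"Name": "BatchesRunning", "Value": 3},
      {"Name": "BatchesQueued",  "Value": 12}
    ]
  }
}
"""

payload = json.loads(sample_response)

# Turn the feed entries into a simple name -> value lookup.
stats = {entry["Name"]: entry["Value"] for entry in payload["d"]["results"]}
print(stats)
```

The point is that the consumer only depends on the feed shape, not on where the data came from: a database, an external service, or in-memory server statistics all look the same.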
Currently, the number of consumers is limited, but Microsoft has plans to get all its product lines consuming OData.
To me, that gives us a strong incentive to expose data using OData.
I just read a nice article from Scott Gu’s Blog: Building a Windows Phone 7 Twitter Application using Silverlight.
Scott builds two sample mini-apps (a Hello World and a simple Twitter client). The integration with the Visual Studio designer seems to be quite good; you even see the phone while you're designing! You still have the entire .NET stack to harness, including WCF, Silverlight, events, multi-threading, etc.
This definitely beats iPhone development any day!
After toying with Microsoft Virtual Labs (http://msdn.microsoft.com/fr-fr/aa570323(en-us).aspx) on Workflow Foundation (WF) in .NET Framework 4.0, I must say I’m pretty impressed.
Coupled with Windows Server AppFabric (formerly known as Dublin Server), it's basically BizTalk Server without all the schemas and adapters that come out-of-the-box with BizTalk. Otherwise, it's as powerful and actually much easier to use.
The integration of WCF & WF, so half-baked in .NET 3.5 SP1, is quite well done. You create a workflow, drop a few Send/Receive activities around, and that creates an implicit WCF endpoint with an implicit WCF contract. Quite gorgeous!