Gartner on Windows Azure AppFabric: A Strategic Core of Microsoft’s Cloud Platform

Yesterday, I talked about a Forrester report on SQL Azure.  Today, I’m going to talk about a Gartner report on Windows Azure AppFabric.  The report can be read at:

http://www.gartner.com/technology/media-products/reprints/microsoft_mapp/vol2/article14/article14.html

Basically, the key elements are:

  • They acknowledge that cloud computing is a strategic investment for Microsoft, comparing it to the web in the 1990s.
  • Microsoft is targeting the traditional three layers of cloud computing (IaaS, PaaS & SaaS) and intends to be a leader in all of them.
  • Windows Azure AppFabric has emerged as a core platform technology in Microsoft’s Cloud vision.
  • Many teams are involved in cloud computing at Microsoft and, as a result, the lack of synchronization between the different deliveries creates some confusion.

As for their recommendations:

  • Microsoft should deliver a competitive and complete cloud platform within 2 or 3 years.  Gartner estimates that the current state of the platform is incomplete and unproven.  Early adopters should expect a bumpy ride.
  • IT planners should consider Microsoft as a candidate for cloud computing but, again, in the short term the offering isn’t complete.

The paper gives a good overview of the platform as it is today.  That alone gives the paper a lot of value, since this information is otherwise quite scattered across Microsoft sites.

I found their recommendation a bit harsh though.  Calling Windows Azure unproven is a bit extreme given the number of sites running on it today and the fact that Microsoft is slowly moving its own assets onto it as well.

There is no Gartner curve or quadrant, so I’ll have to rely on my natural verbosity at the next cocktail party!

SQL Azure & ACID Transactions: back to 2001

I’ve been meaning to write about this since I read about it back in July; today is the day.

You know I love Microsoft SQL Azure.

The technology impressed me when it was released.  Until then, Azure offered only Azure Storage.  Azure Storage is a great technology if you plan to be the next eBay on the block and shove billions of transactions a day at your back-end.  If you’re interested in migrating your enterprise application or hosting your mom & pop transactional web site in the cloud, it’s both overkill in terms of scalability and a complete paradigm shift.  The latter frustrated a lot of early adopters.  A few months later, Microsoft replied by releasing SQL Azure.  I was impressed.

Not only did they listen to feedback, but they worked around the clock to release quite a nice product.  SQL Azure isn’t just SQL Server hosted in the cloud.  It’s totally self-managed.  SQL Azure keeps 3 redundant copies of your data, so it’s resilient to hardware failures and maintenance:  like the rest of Azure, it’s built with failure in mind as part of life, to be dealt with by the platform.  Also, SQL Azure is trivial to provision:  just log in to the Windows Azure portal and click New SQL Azure…

This enables a lot of very interesting scenarios.  For instance, if you need to stage data once a month and don’t have the capacity in-house, go for it; you’re going to pay only for the time the DB is on-line.  You can easily sync it with other SQL Azure DBs and soon you’ll be able to run reporting in the cloud with it.  It’s a one stop shop and you pay for use:  you don’t need to buy a license for SQL Server nor for the Windows Server running underneath.

Now that is all nice and you might think:  let’s move everything there!  OK, it’s currently limited to 50 GB, which is a show stopper for some enterprise applications and certainly a lot of e-Commerce applications, but that still leaves a lot of scenarios it can address.

A little caveat I wanted to talk to you about today is…  its lack of distributed transaction support.

Of course, that makes sense.  A SQL Azure DB is a virtual service.  You can imagine that bogging down those services with locks wouldn’t scale very well.  Plus, just because two SQL Azure databases reside in your account doesn’t mean they reside on the same servers.  So supporting distributed transactions would lead to quite a few issues.

Now most of you are probably saying to yourselves:  “who cares, I hate those MSDTC transactions requiring an extra port to be open anyway and I never use them”.  You might not use distributed transactions directly, but you might have become accustomed to using the .NET Framework (2.0 and above) class System.Transactions.TransactionScope.  This wonderful component allows you to write code with the following elegant pattern:

using (var scope = new TransactionScope())
{
    //  Do DB operations

    //  Mark the scope as complete; the transaction commits when the scope is disposed
    scope.Complete();
}

This pattern allows you to manage your transactions declaratively:  the transaction commits when the scope completes and rolls back if an exception is thrown before that.

Now…  that isn’t supported in SQL Azure!  How come?  Well, yes, you’ve been using it with SQL Server 2005 & 2008 without ever needing the Microsoft Distributed Transaction Coordinator (MSDTC), but you may not have noticed that you were actually relying on a feature introduced in SQL Server 2005:  promotable transactions.  This allows SQL Server 2005 to start a transaction as a light transaction on one DB and, if need be, to promote it later to a distributed transaction spanning more than one transactional resource (e.g. another SQL Server DB, an MSMQ queue or what have you).

When your server doesn’t support promotable transactions (e.g. SQL Server 2000), System.Transactions.TransactionScope opens a distributed transaction right away.

Well, SQL Azure doesn’t support promotable transactions (presumably because it has nothing to promote them to), so when your code runs, it tries to open a distributed transaction and blows up.

Microsoft’s recommendation?  Use light transactions and manage them manually, using BeginTransaction on the connection and Commit & Rollback on the returned SqlTransaction object.  Hence the title:  back to 2001.
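
Concretely, that means going back to the classic ADO.NET pattern.  Here is a minimal sketch (the connection string and the SQL statement are placeholders; requires System.Data.SqlClient):

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        try
        {
            //  Placeholder for your real DB operations
            using (var command = new SqlCommand("UPDATE ...", connection, transaction))
            {
                command.ExecuteNonQuery();
            }

            transaction.Commit();
        }
        catch
        {
            //  Undo the work if anything goes wrong, then let the exception bubble up
            transaction.Rollback();
            throw;
        }
    }
}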

Now, it depends on what you do.  If you’re like the vast majority of developers (and even some architects) and you think ACID transactions are somehow related to LSD, then you probably never manage transactions in your code at all, so this news won’t affect you too much.  If you’re aware of transactions and, like me, embraced System.Transactions.TransactionScope and sprinkled it over your code as if it were paprika on a Hungarian dish, then you might find that migrating to SQL Azure will take a little longer than an afternoon.

Now it all varies.  If you wrapped your SQL connection creation in a factory, you might be able to pull off the migration a little faster, as sketched below.
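
For instance, something along these lines (a hypothetical helper, not an official API) keeps the plumbing in one place, so swapping TransactionScope for explicit SqlTransaction objects doesn’t ripple through the whole code base:

//  Hypothetical factory centralizing connection & transaction creation (requires System.Data.SqlClient)
public static class SqlConnectionFactory
{
    public static SqlConnection CreateOpenConnection(string connectionString)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }

    public static SqlTransaction BeginLightTransaction(SqlConnection connection)
    {
        //  Light (local) transaction only:  the kind SQL Azure supports
        return connection.BeginTransaction();
    }
}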

Anyhow, I found that limitation quite disappointing.  A lot of people use SQL Server light transactions through TransactionScope and that would be (I think) relatively easy to support:  the API could simply fail when you try to promote the transaction.  I suppose this would be a little complicated since it would require a new provider for SQL Azure.  This is what I proposed today at:

http://www.mygreatwindowsazureidea.com/forums/34192-windows-azure-feature-voting/suggestions/1256411-support-transactionscope-for-light-transaction

So please go and vote for it!

Forrester: SQL Azure Raises The Bar On Cloud Databases

On November 2nd, 2010, Forrester Research published a report on Microsoft SQL Azure.  The report can be found on Microsoft’s web site:

http://www.microsoft.com/presspass/itanalyst/docs/11-02-10SQLAzure.PDF

Basically, they interviewed 26 companies using the technology and concluded that:

  • SQL Azure is reliable
  • It delivers for small to medium scenarios
  • What seems to differentiate it from other cloud or DB vendors:
    • Multitenant architecture, which delivers better pricing
    • Easier to use

Currently the top size of a SQL Azure database is 50 GB.  So “medium scenario” here might mean big or small for you, depending on where you are coming from.

Forrester positions Microsoft SQL Azure as a leader in its domain.  They don’t have those fancy Gartner quadrants and curves that go down so well at cocktail parties, but the report does deliver the goods:  SQL Azure rocks!

Now, just to show that I’m not only a zealot, I’m going to deliver a critique of one technical capability of SQL Azure in the next blog post ;)

Internet Explorer 9 – Beta Update

An update to Internet Explorer 9 Beta is available from Microsoft as of yesterday (November 23rd 2010).

This is an update to the full browser, as opposed to the developer preview builds, which aren’t the full Internet Explorer, although the preview builds do work side-by-side with any other version of IE.

Not much is mentioned about what the update brings.  Rumours circulate that a beta 2 will see the light of day before a release candidate.  Stay tuned.

Sharing Data Contracts between clients & servers with WCF Data Services

I’ve been blogging a bit about the OData protocol put forward by Microsoft and even wrote an article about it on Code Project.  That article is supposed to be followed by others about WCF Data Services, the .NET implementation of OData, so…  stay tuned!

I love the OData protocol.  For me it finally delivers on the vision of web services replacing a database for data access and business logic.  What was missing from SOAP web services was the ability to query.  So yes, you could expose your data on the web, but you had to know in advance what type of query your clients would need.  If you were not sure, you would pretty much end up with the likes of GetCustomerByID, GetCustomers, GetCustomersByContractID, GetCustomersByFirstName, SearchCustomerByName, GetCustomersWhoWasInTheLobbyWithAPipeWrench and so on.  All those web services were doing only one thing:  being a thin adapter to your back-end store.  Each time I saw those, it reminded me that web services were a young technology and SQL a much more mature one.

With OData that changed a little, since you can now query your web services, so one web service implementation should satisfy most of your clients’ needs for read operations.  Now, if like me you thought SOAP web services were a young and immature technology, wait until you meet OData and its .NET implementation, WCF Data Services.

WCF Data Services has very little to do with WCF besides being an API for services exposed on the web.  Most of the WCF pipeline is absent, the ABC (Address, Binding & Contract) of WCF is nowhere to be found and, when it doesn’t work, you get a nice “resource not found” message in your browser as the only troubleshooting information.

Nevertheless, it does get the job done and exposes OData endpoints, which are great and versatile.  With .NET 4.0, WCF Data Services got some improvements too:  greater querying capabilities (e.g. the ability to count an entity set), interceptors and…  service operations.

Service operations really fill the gap between a SOAP web service with parameters and a plain OData entity set.  A service operation allows you to define an operation with parameters, but one whose result can be further queried (and filtered) by the client.
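
To give an idea, here is the kind of requests this enables (the service and entity names are the ones from the proxy example later in this post; the URLs themselves are illustrative):

GET /PocQueryService.svc/Files?$filter=startswith(ID,'M')&$orderby=ID desc
GET /PocQueryService.svc/GetFilesWithParties?minPartyCount=2&$top=10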

Now, the client-side story isn’t as neat as WCF in general.  With WCF you can share your service and data contracts between your server and your client.  This isn’t always what you want to do, but it’s a very useful scenario:  it allows you to share entities between the client and the server and enables you to share components dealing with those entities.  The key API there is ChannelFactory;  instead of using Visual Studio or a command-line tool to generate a proxy, you let that class build one from your service contract.  It works very well.
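
As a reminder, here is a minimal sketch of that classic WCF approach (the contract, entity and address are hypothetical; the point is that the same interface assembly is referenced by both tiers; requires System.ServiceModel & System.Runtime.Serialization):

//  Hypothetical contract shared between client and server
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomerById(int id);
}

[DataContract]
public class Customer
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}

//  Client side:  no generated proxy, ChannelFactory builds the channel from the shared contract
var factory = new ChannelFactory<ICustomerService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://localhost/CustomerService.svc"));
ICustomerService proxy = factory.CreateChannel();
Customer customer = proxy.GetCustomerById(42);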

This doesn’t exist with WCF Data Services.  You’re pretty much forced to generate proxies with Visual Studio or command-line tools.  This duplicates your types between your server and client and doesn’t allow you to have components (e.g. business logic) shared between the server and client using data entities.  To top it off, the generation tools don’t support service operations at all.

In order to use a service operation on the client side, you need to amend the generated proxy.  Shayne Burgess has a very good blog entry explaining how to do it.  That blog entry actually inspired me to write client proxies entirely by hand, which allows us to share data entities across tiers.  Here is how to do it.

First you need a service operation.  You can learn how to do that here.  The twist we’re going to add is to define an interface for the model, another one for the service operations and one combining both.  The reason to split those interfaces is that the service operations are implemented in the data service directly, while the entity sets live in the data model itself.

public interface IPocQueryService : IPocQueryModel, IPocQueryOperations
{
}

public interface IPocQueryModel
{
    //  Entity sets exposed by the underlying data model
    IQueryable<FileInfoData> Files { get; }
}

public interface IPocQueryOperations
{
    //  Service operations implemented directly on the data service
    IQueryable<FileInfoData> GetFilesWithParties(int minPartyCount);
}

Now we can define our client-proxy.

public class PocQueryServiceProxy : DataServiceContext, IPocQueryService
{
    #region Constructor
    public static IPocQueryService CreateProxy(Uri serviceRoot)
    {
        return new PocQueryServiceProxy(serviceRoot);
    }

    private PocQueryServiceProxy(Uri serviceRoot)
        : base(serviceRoot)
    {
    }
    #endregion

    //  Entity set:  maps directly to the "Files" entity set exposed by the service
    IQueryable<FileInfoData> IPocQueryModel.Files
    {
        get { return CreateQuery<FileInfoData>("Files"); }
    }

    //  Service operation:  addressed by name, with its parameter passed as a query option
    IQueryable<FileInfoData> IPocQueryOperations.GetFilesWithParties(int minPartyCount)
    {
        return CreateQuery<FileInfoData>("GetFilesWithParties").AddQueryOption("minPartyCount", minPartyCount);
    }
}

On the client side, we can use the proxy as if we were talking directly to the DB.

var service = PocQueryServiceProxy.CreateProxy(builder.Uri);
var files = from m in service.Files
            where m.ID.Contains("M")
            orderby m.ID descending
            select m;
var files2 = service.GetFilesWithParties(2);

I’m not a huge fan of having so many interfaces for such a simple solution.  We could have only one and use it for the proxy only, but then we would only be loosely coupled with the server, which is exactly what I am trying to avoid by sharing the interfaces between the client & the server.

Windows 8: Desktop as a Service?

In the wild country of rumours about Windows 8, there’s a new entry:  Desktop as a Service (thanks to Mary-Jo Foley for the heads-up).  Some slides have indeed leaked from the Microsoft architectural summit held in London in April 2010, showing Microsoft’s vision of the next step for Windows virtualization.

The virtualization of applications was done with App-V in Windows 7, while the virtualization of the OS refers to native VHD booting.  That is already doable in Windows 7, although it requires a bit of tweaking.

So we are left to speculate about Desktop as a Service (DaaS), although one of the slides gives some hints:

The desktop should not be associated with the device. (T)he desktop can be thought of as a portal which surfaces the users apps, data, user state and authorisation and access.

Now that is interesting.  With Office 365 (aka BPOS) for the server side of the apps, maybe Microsoft will eventually provide all the client apps as a service as well.

This would go a long way toward resolving enterprise IT’s headaches, where the migration from Windows XP is a major issue and the benefits rarely outweigh the costs.  With a more lightweight OS and DaaS, a migration would be a better value proposition.  It wouldn’t remove one of the big costs of migration though:  training.

Office Web App to power Facebook emails

The new Facebook mail service will use Microsoft Office Web Apps to view Microsoft Office documents.

This follows the news of Facebook using Bing to search the social network.

It’s interesting to see Microsoft positioning itself in the social networking space and the search space.  It recently abandoned the idea of powering its own Windows Live blogging platform, outsourcing it to WordPress.  I interpreted that as a sign of Microsoft moving away from the social media scene.  Apparently it was more a repositioning than a retreat.

Making alliances with Facebook might work better for Microsoft in the long run than trying to compete with Windows Live.  It’s a shame though, considering that Live Messenger is the most widespread chat tool and, in some ways, an ancestor of social media.

Entity Framework 4.0: POCO or POCO?

The Entity Framework shipping with .NET Framework 4.0 is a major improvement over the .NET Framework 3.5 SP1 Entity Framework, which was the first version.  That first version was a nice curiosity but had many shortfalls:  it didn’t support stored procedures, the designer wasn’t flexible, the generated ad hoc queries were quite hard to read, etc.

EF 4.0 improved on all those fronts.  It is now a very usable Framework and I recommend it.

EF 4.0 also comes with POCO support.  The default usage of EF 4.0 (and EF 3.5 SP1 for that matter) generates entities deriving from EntityObject and stuffed with attributes.  POCO support, on the other hand, lets you generate entities that are Plain Old CLR Objects (POCO).

Now, if you’re like me and you follow Scott Gu’s blog, you might have read an entry about Code-First development in Entity Framework and labelled it “POCO” in your mind.  That blog entry says Code-First is in CTP, while the other POCO you’ve heard about shipped with .NET Framework 4.0.  So what is the difference between those two technologies?

Well…  many, but they might not all impact you.

Let’s look at the POCO support of EF 4.0.  In order to use it today, you need to change the code generation strategy of your model.  On the EF designer surface, open the context menu and select Add Code Generation Item.

This will bring you to an empty Add Item dialog box.  Go to the online templates and download the ADO.NET C# POCO Entity Generator:

This will download a T4 template onto your box, which generates a slightly different set of classes.  Basically, the object context is similar but the entities are POCOs instead of deriving from EntityObject.
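
Roughly speaking, the generated entities end up looking like plain classes along these lines (a hand-written approximation with an assumed Customer/Order model, not the actual template output):

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }

    //  Collection navigation property;  in the real template it is backed by the
    //  FixupCollection<T> shown further down to keep both ends of the relationship in sync
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }

    public virtual Customer Customer { get; set; }
}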

That’s it.  That’s what POCO offers you today.

Now what are the drawbacks of that technology?  The main drawback is that the entities are still generated.  So if you would like to use objects provided to you (e.g. WCF data contracts), this mechanism isn’t useful to you.

Now, if you play with the generated code a little, you’ll see that you could write it yourself, which brings you pretty close to what Code-First development has to offer.

That is, as long as you play with single tables with no relations.  Once you start adding tables with relations, you start seeing fix-up code popping up everywhere.  Basically, the management of items in collections isn’t exactly trivial and, frankly, it’s a bit messy.  Just have a look at this gem, generated for you:

public class FixupCollection<T> : ObservableCollection<T>
{
    protected override void ClearItems()
    {
        //  Remove items one by one so the relationship fix-up logic runs for each removed item
        new List<T>(this).ForEach(t => Remove(t));
    }

    protected override void InsertItem(int index, T item)
    {
        //  Skip duplicates:  fix-up code may try to add the same item from both ends of the relationship
        if (!this.Contains(item))
        {
            base.InsertItem(index, item);
        }
    }
}

Basically, the Entity Framework was designed on the assumption that entities track themselves.  Once you introduce POCOs, you remove that foundation and, not surprisingly, the building starts to shake.

But not everything is lost:  this is where Code-First EF comes in.  With Code-First, you really get your POCO classes.  To take the examples given by Scott on his blog, here’s how your entities look:

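A minimal sketch in the spirit of Scott’s NerdDinner sample (class and property names are assumptions, not his exact code):

public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }
    public DateTime EventDate { get; set; }
    public string Address { get; set; }

    public virtual ICollection<RSVP> RSVPs { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public string AttendeeEmail { get; set; }

    public virtual Dinner Dinner { get; set; }
}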

And this is what the object context looks like:

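Again a sketch rather than Scott’s exact code (DbContext and DbSet are the real CTP classes, the names around them are assumed):

public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}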

Now, that is POCO!

You might have noticed that some classes have changed.  Instead of ObjectContext we have DbContext and instead of ObjectSet we have DbSet.  Basically, those classes rebuild the foundation the other POCO approach was missing.

Now Code-First doesn’t stop there.  You don’t need a mapping XML file with this Framework.  By default, it uses conventions (convention over configuration) to map your objects to tables and columns.  So if you have a table Dinner and an object Dinner, you don’t need to configure anything.  If your object context has the same name as the connection string in your web / app.config, you don’t even need to plug the two together.  Like magic!

Actually, a bonus is that the connection string doesn’t need to be an EF connection string with your SQL connection string embedded in it.  That alone is something!
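
In other words, a plain connection string whose name matches the context class (NerdDinners in the sketch above) should be all that’s needed:

<connectionStrings>
  <add name="NerdDinners"
       connectionString="Server=.\SQLEXPRESS;Database=NerdDinners;Trusted_Connection=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>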

You can actually have the DB schema generated from your DbContext.  Magic, I told you.

What can you do if your object model doesn’t map exactly to your DB?  Well, you have to use a configuration API (I don’t know if you can use the mapping file with Code-First).  You can also annotate your POCO objects.

Now code-first is in CTP.  The fourth CTP was released in July 2010.  They are looking for a vehicle to ship it…  Maybe an SP1 of the Framework?

If you look at all of this, it seems like EF is changing.  The mapping file is slowly fading away, being replaced by an API, new annotations come in, all the base classes change…  On the other hand, when you use it, it’s pretty much the same.

As for me, I’ll wait before using Code-First on a real project.

Scott Gu: Silverlight is strategic for Microsoft

The Gu has spoken:  Silverlight is important and strategic for Microsoft.

A bit like I mentioned earlier this week, the main point of Microsoft’s strategy is that Silverlight is for rich client applications while HTML 5 is for broader reach.

An interesting point Scott Gu makes is that many of the new devices don’t even have an open development platform, so HTML is the only option there.

So it would seem that, for the time being, rich client applications will remain siloed on their different platforms.  Nothing surprising there.  But I wonder how long it will take before the cost of the multiplication of form factors catches up with us.