I am currently working on a very exciting project involving systems integration across the Azure Messaging Service Bus. I thought I would share some of the painfully acquired knowledge nuggets with you.
About 90% of the examples you’ll find on the Internet use the Azure Service Bus SDK with ‘owner’. That is basically ‘admin’ privilege because owner has read/write AND manage permissions on the entire Service Bus namespace.
Although that is fine for getting used to the SDK, it isn’t a very secure setting for a production environment. Indeed, if the owner credentials get compromised, the entire namespace is compromised. To top it off, Microsoft recommends not changing the password & symmetric key of the owner account!
So what can we do?
Entities in Service Bus (i.e. Queues, Topics & Subscriptions) are modelled as relying parties in a special Azure Access Control Service (ACS) namespace: the Service Bus trusts its buddy-ACS, i.e. the one having the same name with ‘-sb’ appended to it, as a token issuer. So access control is going to happen in that ACS.
You do not have access to that ACS directly; you must go through the Service Bus page:
Once on that ACS, you can find the Service Identities tab:
And there, you’ll find our friend the owner:
So owner is actually a Service Identity in the buddy-ACS of the Service Bus.
Now, let’s look at the relying parties:
As I said, relying parties represent Service Bus entities. Basically, any topic has the realm:
http://<namespace>.servicebus.windows.net/<topic name>
while any subscription has
http://<namespace>.servicebus.windows.net/<topic name>/Subscriptions/<subscription name>
But there is a twist: if you do not define a relying party corresponding exactly to your entity, ACS will look at the other relying parties, basically chopping off the right-hand side of the realm until it finds a matching one. In this case, since I haven’t defined anything, the root of my namespace is the fallback realm.
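That fallback behaviour can be sketched as a little Python model (purely illustrative; the namespace and realm names are made up, and this is of course not how ACS is actually implemented):

```python
def find_relying_party(entity_uri, relying_party_realms):
    """Mimic ACS realm fallback: chop path segments off the right of the
    entity URI until a configured relying-party realm matches."""
    candidate = entity_uri.rstrip("/")
    while candidate:
        if candidate in relying_party_realms:
            return candidate
        # Chop off the right-most path segment and try again.
        candidate, _, _ = candidate.rpartition("/")
    return None  # No matching relying party at all.

# Only the namespace root is configured, as in the example above.
realms = {"http://mynamespace.servicebus.windows.net"}
match = find_relying_party(
    "http://mynamespace.servicebus.windows.net/mytopic/Subscriptions/mysub",
    realms)
# match == "http://mynamespace.servicebus.windows.net" (the fallback realm)
```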
If we click on Service Bus, we see the configuration of the Service Identity and at the end:
The permissions are encoded in the rules. A rule is basically an if-then statement: if that user authenticates against this relying party, emit that claim. For Service Bus, the only interesting claim type is net.windows.servicebus.action:
So here you have it. Service Bus performs access control with the following steps:
- Check ACS for a relying party corresponding to the entity it’s looking at
- If that relying party can’t be found, strip URL parts from the right until one is found
- ACS runs the rules of the relying party with the Service Identity of the consumer
- ACS returns a SWT token with claims in it
- Service Bus looks for the claim corresponding to the action it needs to perform: Listen (receiving messages), Send & Manage.
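The steps above can be mimicked with a toy sketch (identity and realm names are made up; real ACS evaluates rules over input claims and returns a signed SWT token, which this obviously skips):

```python
def emit_claims(service_identity, relying_party, rules):
    """Run ACS-style if-then rules: if this identity authenticates against
    this relying party, emit the configured output claims."""
    return [
        claim
        for (identity, party, claim) in rules
        if identity == service_identity and party == relying_party
    ]

def authorize(action, claims):
    """Service Bus side: look for the claim matching the required action."""
    return ("net.windows.servicebus.action", action) in claims

# One rule: the 'webrole' identity may Send on this topic's realm.
rules = [
    ("webrole", "http://ns.servicebus.windows.net/mytopic",
     ("net.windows.servicebus.action", "Send")),
]
claims = emit_claims("webrole", "http://ns.servicebus.windows.net/mytopic",
                     rules)
authorize("Send", claims)    # True: the Send claim was emitted
authorize("Manage", claims)  # False: no rule emits Manage
```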
So… if you want to give a specific agent (e.g. a web role) access to send messages on a topic, you create a Service Identity for the agent and a relying party corresponding to the topic. You then enter a rule that emits a Send action claim and you should be all set.
This does require you to store the Service Identity’s secret in the agent.
In my last entry about REST web services I talked about their biggest weakness for me: the lack of a description model for REST services.
The idea of hitting an HTTP endpoint as a shot in the dark is for me quite a leap of faith, and very likely an invitation to spend hours troubleshooting.
Web Services Description Language -> WSDL
Web Application Description Language -> WADL
So WADL aims to be the WSDL of REST.
No other parties seem to have backed it, so it seems doomed to join the junkyard of unilateral attempts at standardizing global assets!
You can look up an example on Wikipedia.
Maybe we’ll have another standard one day. Or maybe it’s a non-issue and I’m the only one to worry about it.
Once upon a time there was SOAP. SOAP really was a multi-vendor response to CORBA. It even shares the same type of acronym, derived from ‘object’. Objects are so 90′s dude… The S in SOAP stands for Simple by the way. Have a go at a bare WSDL and try to repeat in your head that it is simple…
Then REST came along. I remember reading about REST back in 2002. It was a little after Roy Fielding‘s seminal article (actually his PhD thesis). Then there were a few articles about how SOAP bastardized the web and how XML RPC was so much better. But like the VHS vs Betamax battle before, the winner wasn’t going to be chosen on technical prowess. At least not at the beginning.
Then I stopped hearing about REST in 2003 and started seeing SOAP everywhere. We implemented it like COM+ interfaces really. A classic in the .NET community was to throw DataSets on the wire via SOAP services. That really was a great way to misuse a technology… Ah… the youth… (a tear).
Microsoft tried to correct the trajectory by introducing WCF, which enforced, or at least strongly suggested, a more SOA approach with a stronger focus on contracts and on making boundaries more explicit. But somehow it was too late… something else was brewing beneath the SOA world…
In 2007, REST came back into fashion but now it was mainstream, i.e. people didn’t understand it, misquoted it and threw it everywhere. Basically, it was: cool man, no more bloody contracts, I just send you an XML document, it’s so much simpler! Which of course works awesomely for 2-3 operations; then you start to get lost without a service repository because there is no explicit documentation!
If you see a parallel with the No-SQL movement (cool man, no more bloody schema, I just throw data in a can without ceremony, it’s so much simpler), I got no idea what you are talking about.
Anyway, if it wasn’t obvious, I’m not at all convinced that REST services solve that many issues by themselves. Ok, they don’t require a SOAP stack, which makes them appealing for a broader reach (read browser & mobile). But without the proverbial Word document next to you to know which service to call and what to do with it, they aren’t that easy to use.
Then, finally, came Hypermedia APIs… I’ve read a few articles about those, including the very good Designing and Implementing Hypermedia APIs by Mike Amundsen. I found in Hypermedia APIs the same magic I found when looking at HTML for the first time: simple, intuitive & useful.
Hypermedia APIs are basically REST web services where you have one (or a few) entry-point operations from which you can find links to other operations. For instance, a list operation would return a list of items, and each item would contain a URL pointing to the detail of that item. Sounds familiar? That’s how a portal (or dashboard) works in HTML.
Actually, you already know the best Hypermedia API there is: OData. With OData, you group many entities under a service. The root operation returns you a list of entities with a URL to an operation listing the instances of those entities.
The magic with Hypermedia APIs is that you just need to know your entry points and then the service becomes self-documented. It replaces a meta data entry (a la WSDL) with the service content itself.
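To make that concrete, here is a minimal sketch of what a hypermedia client does, with a made-up payload shape (real hypermedia formats such as OData have their own link conventions):

```python
import json

# A toy hypermedia response: each item carries the URL of its own detail
# resource, so a client only needs to know the list entry point.
list_response = json.loads("""
{
  "items": [
    {"name": "first",  "detailUrl": "/api/items/1"},
    {"name": "second", "detailUrl": "/api/items/2"}
  ]
}
""")

# The client follows the links it is given instead of constructing URLs
# from out-of-band documentation (the proverbial Word document).
detail_urls = [item["detailUrl"] for item in list_response["items"]]
# detail_urls == ["/api/items/1", "/api/items/2"]
```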
The difference between now and the 2000′s when SOAP was developed is that now we really do need Services. We need them to integrate different systems within and across companies.
SOAP failed to deliver because of its complexity but mostly because it’s a nightmare to interoperate (ever tried to get a System.DateTime .NET type into a Java system? Sounds trivial, doesn’t it?).
REST seems easier on the surface because it’s just XML (or JSON). But you do lose a lot: the meta-data but also the WS-* protocols. Ok, it was nearly impossible to interoperate with those, but at least there was a willingness, a push, to standardise on things such as security & transactions. With REST, you’re on your own. You want atomicity between many operations? No worries, I’ll bake that into my services! It won’t look like anything else you’ve ever seen or are likely to see though.
Mostly, you lose the map. You lose the ability to say ‘Add Web Reference’ and have your favorite IDE pump in the metadata and generate nice strongly typed proxies that show up in IntelliSense as you interact with them. Sounds like a gadget, but how much is IntelliSense responsible for your discovery of APIs? For me, it must be above 80%.
Hypermedia API won’t give you Intellisense, but it will guide you in how to use the API. If you use it in your designs, you’ll also quickly find out that it will drive you to standardise on representations.
I’ve just published this NuGet package.
Ok, so why do yet another ePub library on NuGet when there are already a few?
Well, there aren’t that many actually, and none are Portable Class Libraries (PCL).
So I’ve built an ePub library portable to both Windows 8+ & .NET 4.5.1. Why not Windows Phone? My library is based on System.IO.Compression.ZipArchive, which isn’t available on Silverlight in general. That being said, what would be the use case for generating an ePub archive on a smartphone?
I have in my possession a Kobo Touch (yes, my Canadian fiber got involved when I chose the Kobo). I love to read on it: it is SO much more relaxing for my eyes than a tablet. It’s like reading a book but where I can change the content all the time. You see I use it to read a bunch of technical articles on public transport, so I upload new stuff all the time.
I wanted to automate parts of that, and hence I needed an ePub library. I would like to embed that code in a Windows App at some point (this is mostly pedagogical for me, you see), so I needed something PCL.
Anywho, two technical things to declare:
1. ePub is complicated!
If you ever want to handcraft an ePub, use an ePub validator such as the excellent http://www.epubconversion.com/ePub-validator-iBook.jsp. Otherwise the ePub just doesn’t work and ePub tools (either eReader or Windows App) are quite silent about the problems.
The biggest annoyance for me was the spec requirement that your first file’s content start at byte 38. That file is the mime type of the ePub and is meant to be a sort of check: no need to open the archive (an ePub is a zip file underneath); a client can simply go to byte 38 and check for the ePub mime type to validate that it has a valid ePub in its hands.
Well, for that you need to write the mime type file first AND not compress it. Apparently that’s too much for System.IO.Compression.ZipArchive. I really needed that library since it works in async mode. So I did a ‘prototype’ ePub file containing only the mime type using another zip library (the excellent DotNetZip) and used that prototype as the starting point of any future ePub!
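For illustration, here is the same trick in an environment whose zip library does let you control entry order and compression (Python’s zipfile, standing in for DotNetZip; this is not my library’s API). The byte-38 magic falls out of the zip format: a 30-byte local file header plus the 8-byte file name ‘mimetype’ puts the content at offset 38, provided the entry is first and stored uncompressed:

```python
import io
import zipfile

# Build the skeleton of an ePub: the 'mimetype' entry must be the FIRST
# file in the archive and must be STORED (not compressed), so that its
# content starts exactly at byte 38 (30-byte local header + 8-byte name).
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as epub:
    epub.writestr(zipfile.ZipInfo("mimetype"), "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)
    # Remaining entries can be compressed normally.
    epub.writestr("META-INF/container.xml", "<container/>",
                  compress_type=zipfile.ZIP_DEFLATED)

data = buffer.getvalue()
# The quick validity check a reader can do without unzipping anything:
assert data[38:38 + 20] == b"application/epub+zip"
```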
2. My first NuGet package
Yep! So I went easy on myself and downloaded a graphic tool, NuGet Package Explorer.
I didn’t use many NuGet features besides embedding the XML comment file in the NuGet package.
It’s quite cool to handle packages the NuGet way. You can update them at will completely independently…
I’ve written (a year and a half ago actually) a blog series around all the concerns and issues around implementing SOA in an Enterprise Solution:
Very interesting article from Mark Russinovich dating from December 1998:
Russinovich explains how David Cutler led Windows NT after leaving Digital and how he and his team borrowed from their work on VAX machines.
The article goes on to explain the similarities between Windows NT & VMS (the OS of the VAX) and in doing so goes into some of the low-level details of Windows NT, with Russinovich’s talent for explaining complex systems in a few lines.
A must read!
Among the flurry of new features of Windows Server 2012 R2 is Windows Azure Pack (WAP).
I’ve read about the product but haven’t installed it yet, and therefore do not have hands-on experience with it.
Microsoft positions this new offering within its Cloud OS vision:
- Customer, the consumer of cloud services in this People Centric IT vision
- Windows Azure, Microsoft Public Cloud offering
- Service Provider, third parties providing services in the cloud
Windows Azure Pack sits squarely between the last two: it brings Windows Azure to Service Providers.
WAP is part of Windows Server 2012 R2 System Center. It doesn’t involve additional cost and leverages System Center Virtual Machine Manager (VMM). It is an on premise, private cloud system.
To me this is very exciting.
For years we’ve heard about private cloud. Until now the private cloud sat in the gap between public cloud offerings (Windows Azure, Amazon, etc.) and virtualization platforms (Microsoft Hyper-V, VMware).
It was basically: “take your favorite virtualization platform and build an entire self-provisioning system taking care of storing a virtual image gallery, letting end-users provision their own workloads, monitoring, alerting, billing, etc.”. I’ve seen companies taking up the challenge: the costs were steep, the timelines delayed and the results disappointing. They never reached a fully self-provisioned state and never went beyond hosting virtual machines.
In that sense, WAP is a game changer. It’s your full Private Cloud solution on a CD. Microsoft published a White Paper on WAP.
Here are the main parts of Windows Azure Pack:
Management portal for Tenants: this is the on premise equivalent to the Windows Azure Developer portal. The resemblance is quite striking:
It allows customers to self-provision different workloads (more on workloads in a moment). Once workloads are provisioned, customers can then manage and monitor them from that same portal.
Management Portal for Administrators: this portal allows administrators to manage the entire Data Center with its different tenants.
Service Management API: a set of REST services giving programmatic access to the two portals. For instance, a customer could provision a Virtual Machine using the Service API, bypassing the portal. This allows for some interesting automation scenarios.
I’ve talked about workloads. Here they are. WAP sports a subset of the services available on Windows Azure, namely:
- Web Sites
- Virtual Machines
- Service Bus
Those address popular scenarios. Notably missing from that list:
- Cloud Services, the original Windows Azure offering, coming in Web and Worker role flavors. It is said that Web Sites are implemented using Cloud Services in Windows Azure, so there are good reasons to believe that is also the case in WAP. If so, Cloud Services are likely to surface in future releases.
- Virtual Network, which allows tenants in Windows Azure to bring Azure virtual machines onto their own network. Although it isn’t mentioned explicitly in the white paper, it seems to be on the tenant portal (see above). Without Virtual Network, many Enterprise scenarios would be difficult to realize, leaving virtual machines in a foreign network.
- SQL Databases, one of the most popular features in Windows Azure. Again, although not mentioned in the White Paper it is on the console’s screen shot. SQL Server is also a prerequisite for installing WAP since the portal uses SQL Server as a back-end store.
- Active Directory, Windows Azure’s integration with on-premise Active Directory and claims-based solutions. Without this, integration with customers’ directories would be difficult, again making certain Enterprise scenarios difficult.
Still, the 3 official workloads are quite a good start for a Private Cloud story. Virtual Machines are certainly going to be the most popular at first while Web sites offer a more PaaS story for more scalable solutions and Service Bus offers a scalable message-based integration solution.
Now what is the operational model of WAP?
Basically, WAP admins create plans. A plan contains available services and quotas. For instance, a basic plan could offer virtual machines, with up to 5 VMs per tenant and a total of 500 GB of storage.
Customers then subscribe to plans and provision services within those plans.
This is summarized in the following picture:
This is a neat model since a Private Cloud typically doesn’t have infinite capacity. The notion of quotas allows administrators to give customers sandboxes where they can scale within manageable limits. Plans can be adjusted on demand, but via a manual process.
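The plan/quota idea can be sketched in a few lines (a hypothetical model for intuition, not the WAP API or its real plan attributes):

```python
# A plan bundles available services with quotas; each tenant subscription
# consumes against the quotas of its plan.
PLANS = {
    "basic": {"vm_count": 5, "storage_gb": 500},
}

class Subscription:
    def __init__(self, plan_name):
        self.quotas = dict(PLANS[plan_name])
        self.usage = {key: 0 for key in self.quotas}

    def provision(self, resource, amount=1):
        """Refuse to provision beyond the plan's quota (the sandbox limit)."""
        if self.usage[resource] + amount > self.quotas[resource]:
            raise ValueError(f"quota exceeded for {resource}")
        self.usage[resource] += amount

tenant = Subscription("basic")
for _ in range(5):
    tenant.provision("vm_count")  # the 5 VMs the basic plan allows
# A sixth provision("vm_count") would raise: quota exceeded for vm_count.
```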
For me, WAP represents a breakthrough in the Private Cloud solutions and is a great opportunity for Service Providers and Enterprises alike.
The preview was announced in July 2013 and general availability is planned, as with the rest of Windows Server 2012 R2, for January 2014.