Monday, July 27, 2015

The Polarizer Pattern

A polarizer is a filter used in optics to control beams of light by blocking some light waves while allowing others to pass through. Polarizers are found in some sunglasses, LCDs, and photographic equipment.

When it comes to managing an API, there is often a need to control which parts of the API get exposed and which do not. This kind of control is generally applied at a Gateway that supports API Management, rather than at the back-end API, which may provide many more functions that are never exposed to a consumer.

A good example is a SOAP or REST service designed to support a web portal, which you also want to expose as an API so that third parties can build their own applications on top of it. Though your API may provide many functions in support of your web portal, you may not want all of them available to third-party application developers, for any of several reasons. As in this example, you will find this pattern most useful when an existing capability that currently serves one purpose also has to be exposed for another purpose, but with restricted functionality.

While The Polarizer might be treated as a special kind of Adapter, the key to differentiating the two in terms of API Management is how they would be implemented. While an adapter may expose a new interface to an existing implementation, adding new capabilities or logic that combines two or more existing capabilities, a polarizer simply restricts the number of methods exposed by an existing implementation without altering any of its functionality. The outcome of The Polarizer may also be similar to a Remote Facade. But, unlike the remote facade, the purpose of a polarizer is not to expose a convenient, performance-oriented coarse-grained interface over a fine-grained implementation with an extensive number of methods; it is purely to restrict methods from being accessible in a given context.
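As an illustrative sketch (the service, method and helper names here are my own, not from any product), a polarizer can be expressed as a wrapper that re-exposes only a whitelisted subset of an existing implementation's methods, delegating to them without changing their behaviour:

```javascript
// A hypothetical back-end service with more methods than we want to expose.
const portalService = {
  getCustomer: (id) => ({ id, name: "Customer " + id }),
  getOrders: (id) => [{ id: 1, customer: id }],
  deleteCustomer: (id) => true,
};

// The polarizer: re-expose only the listed methods, delegating to the
// original implementation without altering any of its logic.
function polarize(service, exposedMethods) {
  const polarized = {};
  for (const name of exposedMethods) {
    polarized[name] = (...args) => service[name](...args);
  }
  return polarized;
}

const publicApi = polarize(portalService, ["getCustomer"]);

console.log(publicApi.getCustomer("ALFKI").name); // delegated unchanged
console.log(typeof publicApi.deleteCustomer);     // filtered out
```

Contrast this with an adapter, which would add or combine logic; here the exposed methods are byte-for-byte the originals, only fewer of them.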

The polarizer also fits alongside patterns such as Model-View-ViewModel (MVVM) and Model-View-Presenter (MVP). Unlike these patterns, which are designed to build integration layers connecting front-ends with back-ends, the focus of the polarizer is to control what is exposed from a back-end without any consideration of a particular front-end. We may, however, find situations where an implementation of the MVVM or MVP pattern also performs the tasks of a polarizer.


The graphic below explains how The Polarizer pattern can be implemented by a typical API Management platform. In such an implementation, the Gateway component simply polarizes all incoming requests through some sort of filter, which may or may not be based on a configurable policy.


The WSO2 API Manager provides the capability to configure which API resources are exposed to the outside world, and thereby polarize the requests to the actual implementation. Polarization is not necessarily a one-time activity for an API: you may decide at a later date to change which methods are exposed. The WSO2 API Manager allows you to do such reconfiguration via the Publisher portal.


Monday, November 24, 2014

State of Development vs. State of Availability

Runtime Governance is a broad area of focus and involves many different kinds of people and processes. The complexity of runtime governance is perhaps the main reason why most projects are either fully or partially unsuccessful in meeting their objectives. If you consider the people involved, there are a variety of roles, including project managers, DevOps, and C-level executives who are interested in the outcomes of runtime governance. In terms of processes, again there are many, such as continuous integration, system monitoring, and analytics for understanding overall performance, generated value, and ROI.


While there are several aspects that require attention to get runtime governance right, one of the most important is having a proper lifecycle management strategy. This is also perhaps the most misunderstood area of runtime governance. The whole idea of a design/development lifecycle is to keep track of a project’s progression from Concept to Production. But once in production, such a lifecycle is not really going to help. However, more user-oriented systems such as API Stores and Gateways also require a concept of a lifecycle to manage a running system, focused not on the development of a project but on its availability, that is, on whether it can be used or accessed by an end-user. This is what gives rise to two separate kinds of state that you need to keep track of in a system, namely, the State of Development and the State of Availability.

The State of Development is all about keeping track of whether a project is ready to go into production, or a “running live” setting. This involves lining up the development and continuous integration processes to answer questions such as whether the project is built properly, whether best practices have been followed and whether proper testing has been done. The lifecycle itself might be fully automated, semi-automated or even manual. The level of automation does not affect the ability to answer these readiness questions; however, automation can eliminate a significant proportion of human error and produce more robust outputs within strict timelines. The only downside of automation is that it leaves little room for manual override and limits the agility of the project, creating a scenario where the “system drives the human” rather than the “human drives the system”.

The State of Availability is all about understanding whether your project is ready to be accessed by the outside world (or the world beyond the project team). Interestingly, most projects become accessible well before they reach the Production state, and you will often find conflicts with the all-in-one linear and continuous lifecycles that attempt to merge the concepts of development and availability together. This creates situations where the process and the tooling do not fit, leading development teams to invent their own workarounds to make things happen. In a well-designed lifecycle management system, making things available and keeping track of development should both be possible at the same time. But these concepts are not fully orthogonal, and the teams themselves should be able to decide how the two connect to each other.

Therefore, to solve the problem of two kinds of state, the lifecycle management of your project should be designed such that it takes both of these things into consideration. Both of these kinds of state will have multiple stages of progression and they will require concepts of checklists, validations, approvals and permissions for the model to be meaningfully governed. Therefore, from the tool’s point of view, there should exist the ability to support multiple parallel lifecycles at the same time, which can be separately tracked and monitored. Such a Governance Framework will be able to support both Continuous Integration Systems and Enterprise Asset Registries at the same time.
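As a minimal sketch of the parallel-lifecycle idea (the stage names and data structure here are illustrative, not taken from any specific governance product), each asset can carry a separate, independently validated current stage per lifecycle:

```javascript
// Two independent lifecycles, each with its own ordered stages.
const LIFECYCLES = {
  development:  ["Concept", "Development", "Testing", "Production"],
  availability: ["Unpublished", "Published", "Deprecated", "Retired"],
};

// An asset tracks a separate current stage for each lifecycle.
function createAsset(name) {
  return { name, state: { development: "Concept", availability: "Unpublished" } };
}

// Promote an asset along one lifecycle, validating the stage order; in a real
// governance framework this is where checklists and approvals would apply.
function promote(asset, lifecycle, nextStage) {
  const stages = LIFECYCLES[lifecycle];
  const current = stages.indexOf(asset.state[lifecycle]);
  if (stages.indexOf(nextStage) !== current + 1) {
    throw new Error("Invalid transition to " + nextStage);
  }
  asset.state[lifecycle] = nextStage;
  return asset;
}

const api = createAsset("northwind");
promote(api, "development", "Development");
promote(api, "availability", "Published"); // availability moves independently
console.log(api.state);
```

The point of the sketch is only that the two states are tracked in parallel; how (or whether) a transition in one gates a transition in the other is left to each team, as argued above.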

Saturday, August 16, 2014

API Management for OData Services

The OData protocol is a standard for creating and consuming Data APIs. While REST gives you the freedom to choose how you design your API and the queries you pass to it, OData tends to be a little more structured, but at the same time more convenient for exposing data repositories as universally accessible APIs.


However, when it comes to API Management for OData endpoints, there aren’t many good options out there. WSO2 API Manager makes it fairly straightforward to manage your OData APIs. In this post, we will look at how to manage a WCF Data Service based on the OData protocol using WSO2 API Manager 1.7.0. The endpoint I have used in this example is accessible at http://services.odata.org/V3/Northwind/Northwind.svc.

Open the WSO2 API Publisher by visiting https://localhost:9443/publisher in your browser. Log in with your credentials and click Add to create a new API. Set the name to northwind, the context to /northwind and the version to 3.0.0 as seen below. Once done, click the Implement button towards the bottom of your screen. Then click Yes to create a wildcard resource entry and click Implement again.

Please note that instead of creating a wildcard resource here, you can specify some valid resources. I have explained this towards the end of this post.


In the next step, specify the Production Endpoint as http://services.odata.org/V3/Northwind/Northwind.svc/ and click Manage. Finally, select Unlimited from the Tier Availability list box, and click Save and Publish. Once done, you should find your API created.

Now open the WSO2 API Store by visiting https://localhost:9443/store in your browser, where you should find the northwind API we just created. Make sure you are logged in, and click on the name of the northwind API, which should bring you to a screen like the one below.


You now need to click the Subscribe button, which will take you to the Subscriptions page. Here, click the Generate button to create an access token. If everything went well, your screen should look similar to the one below. Take special note of the access token; you will need it in the steps that follow, so copy it to your clipboard.


The next step is to try the API. You have several choices here. The most convenient way is to use the RESTClient tool that comes with the product: simply select RESTClient from the Tools menu at the top. To use this tool, set the URL to http://localhost:8280/northwind/3.0.0/Customers('ALFKI')/ContactName/$value and the Headers to Authorization:Bearer TOKEN. Remember to replace TOKEN with the access token you obtained in the step above. Once you click Send, you should see something similar to the screenshot below.


Another easy option is to use curl, a very straightforward command-line tool that can be installed on most machines. After installing curl, run the following command in your command-line interface:
curl -H "Authorization:Bearer TOKEN" -X GET "http://localhost:8280/northwind/3.0.0/Customers('ALFKI')/ContactName/$value"
Remember to replace TOKEN with the access token you got from the step above.

For more challenging queries, please read through Microsoft’s guidelines on Accessing Data Service Resources (WCF Data Services). Remember to replace http://services.odata.org/Northwind/Northwind.svc with http://localhost:8280/northwind/3.0.0 in every example you find. For the RESTClient, note that you will have to replace " " with "%20" for things to work. Also, for curl, note that on some command-line interfaces, such as the Terminal on Mac OS X, you might have to replace "$" with "\$" and " " with "%20" for things to work.
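The substitutions above are mechanical, so as a small illustrative helper (the function names here are my own, not part of any WSO2 tool) they can be automated: swap in the gateway base URL, percent-encode spaces, and escape $ for shells that would otherwise expand it:

```javascript
// Rewrite an example query from the Microsoft guidelines so that it targets
// the gateway, with spaces percent-encoded for the RESTClient and curl.
const ODATA_BASE = "http://services.odata.org/Northwind/Northwind.svc";
const GATEWAY_BASE = "http://localhost:8280/northwind/3.0.0";

function toGatewayUrl(exampleUrl) {
  return exampleUrl.replace(ODATA_BASE, GATEWAY_BASE).replace(/ /g, "%20");
}

// For curl on shells such as bash, also escape $ so the shell does not
// try to expand tokens like $value or $filter as variables.
function toCurlUrl(exampleUrl) {
  return toGatewayUrl(exampleUrl).replace(/\$/g, "\\$");
}

const example =
  "http://services.odata.org/Northwind/Northwind.svc/Customers?$filter=Country eq 'Germany'";
console.log(toGatewayUrl(example));
console.log(toCurlUrl(example));
```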

In the very first step, note that we used a wildcard resource. Instead, you can specify individual resources to control what types of access are possible. For example, among the queries listed in the link above, if you want to allow the queries related to Customers but not the ones related to Orders, you can set up a restriction as follows.

Open the WSO2 API Publisher by visiting https://localhost:9443/publisher in your browser. First click on the northwind API and then click the Edit link. Now, at the very bottom of your screen, in the Resources section, set URL Pattern to /Customers* and Resource Name to /default. Then click Add New Resource. After having done this, click the delete icon in front of all the contexts marked /*. If everything went well, your screen should look similar to the following.


Finally, click the Save button. Now retry some of the queries. You should find the queries related to Customers working well, but the queries related to Orders failing, unlike before. This is a very simple example of how to make use of these resources. More information can be found here.
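As a rough sketch of the idea (this is not the API Manager's actual matching code), a wildcard resource pattern such as /Customers* amounts to a simple prefix match on the request path:

```javascript
// Check an incoming request path against a list of wildcard URL patterns,
// where a trailing * matches any suffix (a simplification of real gateways).
function isAllowed(path, patterns) {
  return patterns.some((pattern) =>
    pattern.endsWith("*")
      ? path.startsWith(pattern.slice(0, -1))
      : path === pattern
  );
}

const patterns = ["/Customers*"];
console.log(isAllowed("/Customers('ALFKI')/ContactName/$value", patterns)); // true
console.log(isAllowed("/Orders", patterns)); // false
```

This is exactly why, after the reconfiguration above, Customers queries pass through while Orders queries are rejected at the gateway.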

Please read the WSO2 API Manager Documentation to learn more on managing OData Services and also other types of endpoints.

Sunday, August 3, 2014

The right Governance Tools are key to the right Level of Maturity

Governance is part and parcel of any enterprise of the modern world. Knowingly or unknowingly, every single employee is a part of some form of corporate governance. Having the right tools and frameworks not only helps but also ensures that you design, develop and implement the best governance strategies for your organisation.


The types of tools and the approach required for governance vary significantly depending on the level of maturity of an organisation. The Capability Maturity Model Integration programme describes several levels of maturity an organisational process can be in. It is ideal for every organisation to eventually reach the optimum state in terms of all its processes, but this is not always required and can also be very expensive if overdone.

The key to understanding what level of governance is needed is to find where your organisation is in terms of its level of maturity. And you may choose different types of governance products for different types of process requirements. When selecting the right tool or framework, you should focus not only on what the product is capable of and how much it costs, but also on what types of metrics it can provide to help you iteratively improve the maturity of your organisation.

A basic registry/repository solution that can capture requirements, group them into projects and provide some analytics around them can only help you get past the second level of maturity. At the other extreme, the most advanced deployment, composed of multiple products from multiple vendors in combination with a series of home-grown solutions, will not only burn a lot of your finances but also take a lot of time to establish and maintain.

It takes a lot of thinking and planning, and the right mix of products as well as expertise. To open doors to the next level of maturity, your company will need a governance solution that is tailored to your requirements. Most of the work and organisational transformation happens within the third level of maturity, and the journey beyond is not so difficult. But this is what requires proper understanding and planning, and making the right choice of toolset will be pivotal in taking your organisation to the optimum level of maturity.

Therefore, it is crucial that you pay attention to the requirements of later stages early enough, so that you invest the right amount of time and money before starting to take your organisation to the next level of success.

Sunday, July 27, 2014

Securing the Internet of Things with WSO2 IS

The popularity of the Internet of Things (IoT) is creating demand for more solutions that make it easier for users to integrate devices with a wide variety of on-premise and cloud services. There are many existing solutions that make integration possible, but there are still gaps in several areas, including usability and security.


Node.js

Node.js is a runtime environment for running JavaScript applications outside a browser. It is based on the JavaScript engine of the Google Chrome browser and runs on nearly all popular server environments, including both Linux and Windows. Node.js benefits from an efficient, light-weight, event-driven, non-blocking I/O model, which makes it an ideal fit for applications running across distributed devices.

Node.js also features a package manager, npm, which makes it easy for developers to use a wide variety of third-party modules in their applications. The Node.js package repository boasts over 85,000 modules. The light-weight and lean nature of the runtime environment also makes it very convenient to develop as well as host applications.

Node-RED

Node-RED is a creation of IBM’s Emerging Technology group and is positioned as a visual tool for wiring the Internet of Things. Built on Node.js, Node-RED focuses on modelling various applications and systems as a graphical flow, making it easier for developers to build ESB-like integrations. Node-RED also uses the Eclipse Orion editor, making it possible to develop, test and deploy in a browser-based environment, and it uses a JSON-based configuration model.

Node-RED provides a number of out-of-the-box nodes including Social Networking Connectors, Network I/O modules, Transformations, and Storage Connectors. The project also maintains a repository of additional nodes in GitHub. The documentation is easy to understand and introducing a new module is fairly straightforward.

WSO2 Identity Server

WSO2 Identity Server is a product designed by WSO2 to manage sophisticated security and identity management requirements of enterprise web applications, services and APIs. The latest release also features an Enterprise Identity Bus (EIB), which is a backbone that connects and manages multiple identities and security solutions regardless of the standards which they are based on.

The WSO2 Identity Server provides role-based access control (RBAC), policy-based access control, and single sign-on (SSO) capabilities for on-premise as well as cloud applications such as Salesforce, Google Apps and Microsoft Office 365.

Integrating WSO2 Identity Server with IBM Node-RED

What’s good about Node-RED is that it makes it easy for you to build an integration around hardware, making it possible to wire the Internet of Things together. On the other hand, the WSO2 Identity Server makes it very easy to secure APIs and applications. Both products are free to download and use, and both are based on the enterprise-friendly Apache License, which even makes it possible for you to repackage and redistribute them. This integration brings together the best of both worlds.

The approach I have taken is to introduce a new entitlement node in Node-RED. You can find the source code on GitHub. I have made use of the Authentication and Entitlement administration services of WSO2 IS in my node. Both of these endpoints can be accessed via SOAP or REST. Most read-only operations can be performed using an HTTP GET call, and modifications can be done using a POST with an XML payload.
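As a simplified sketch of the shape of such a node (the real source is on GitHub; the decision logic here is a placeholder, and the WSO2 IS calls are omitted), a custom Node-RED node is registered roughly like this:

```javascript
// Skeleton of a custom Node-RED node. In a real module, this function would
// be assigned to module.exports so that Node-RED can load and call it.
function registerEntitlementNode(RED) {
  function EntitlementNode(config) {
    RED.nodes.createNode(this, config);
    const node = this;
    node.on("input", function (msg) {
      // In the real node, the incoming credentials are checked here against
      // the WSO2 IS authentication/entitlement services (omitted in sketch).
      const authorized = Boolean(msg.authorized); // placeholder decision
      if (authorized) {
        node.send(msg); // pass the message on down the flow
      } else {
        node.warn("access denied"); // surfaces as a warning in Node-RED
      }
    });
  }
  RED.nodes.registerType("entitlement", EntitlementNode);
}
```

Messages only continue through the flow when the check passes, which is what lets the node sit in front of any downstream endpoint.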

The code allows you to either provide credentials through a web browser (using HTTP Basic Access Authentication) or hard-code them in the node configuration. The graphical configuration for the entitlement node allows you to choose either or both of authentication and entitlement checks. Invoking the entitlement service also requires administrative access; these credentials can either be provided separately, or the same credentials used for authentication can be passed on.
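As a small illustration of the browser-credential path (simplified; the real node also supports the configuration-based path), HTTP Basic Access Authentication just base64-encodes username:password into the Authorization header, which the node has to decode before checking the credentials:

```javascript
// Decode an HTTP Basic Access Authentication header into its credentials.
function parseBasicAuth(header) {
  if (!header || !header.startsWith("Basic ")) return null;
  const decoded = Buffer.from(header.slice(6), "base64").toString("utf8");
  const separator = decoded.indexOf(":"); // the password may itself contain ":"
  if (separator < 0) return null;
  return {
    username: decoded.slice(0, separator),
    password: decoded.slice(separator + 1),
  };
}

// What a browser sends after you enter admin / admin at the prompt:
const header = "Basic " + Buffer.from("admin:admin").toString("base64");
console.log(parseBasicAuth(header)); // { username: 'admin', password: 'admin' }
```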

Example Use-cases

To make this easier to understand, I have used Node-RED to build an API that lets me expose the contents of a file on my filesystem. The name of the file can be configured from the browser. This is a useful technique when designing test cases for processing hosted files, or for providing resources such as service contracts and schemas. I have inserted my entitlement node into the flow to ensure that access to the file is secured.

The configuration seen below will both authenticate and authorize access to this endpoint. I have also provided the administrative credentials to access the Entitlement Service, and uploaded a basic XACML policy to the WSO2 Identity Server.

When you access the endpoint, you should now see a prompt requesting your credentials. Only valid user accounts that have been set up on WSO2 Identity Server will be accepted. Failed login attempts, authorization failures and other errors are recorded as warnings in Node-RED; these can be observed both in the browser and on the command prompt in which you are running the Node.js server.