Wednesday, September 23, 2015

Microservices - decomposition writ small

Building robust enterprise solutions requires thinking differently. For many, service oriented architecture is the way to go. But recently, the influx of microservices challenges the idea that there's only one way to skin the proverbial Information Technology cat. As Torsten Winterberg puts it, “Microservices are the kind of SOA we have been talking about for the last decade. Microservices must be independently deployable, whereas SOA services are often implemented in deployment monoliths. Classic SOA is more platform driven, so microservices offer more choices in all dimensions.”

I have always been of the mind that service oriented architecture (SOA) represents an architectural pattern for software design where application components provide services to other components via a communications protocol, typically over a network. Today, many embrace the idea of microservices -- in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. Now, this seems to be splitting hairs, and in practice the concept of microservices might be hard to distinguish from SOA principles. Think of it this way: with SOA, one could write business logic against a pair of coarse-grained endpoints like GetPaymentsAndCustomerInformationAndPurchaseHistoryDataAPI and AuthenticateUsersAPI. The microservices approach would simply be to decompose those two APIs into much smaller units (GetCustomer; SubmitPassword). The net result is the same transactional processing and data traversing the wire, but in smaller increments. And new uses for the multiple APIs could be found, perhaps.
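The decomposition above can be sketched in a few lines. This is a hypothetical illustration, not anyone's real API: the function names are lowercased versions of the endpoints mentioned in the text, and the data is stubbed.

```python
# Coarse-grained SOA-style endpoint: one call returns everything at once.
def get_payments_and_customer_info_and_history(customer_id):
    return {
        "customer": get_customer(customer_id),
        "payments": get_payments(customer_id),
        "history": get_purchase_history(customer_id),
    }

# Fine-grained microservice-style endpoints: each unit is small,
# independently deployable, and reusable in new combinations.
def get_customer(customer_id):
    return {"id": customer_id, "name": "Ada"}   # stubbed data

def get_payments(customer_id):
    return [{"amount": 19.95}]                  # stubbed data

def get_purchase_history(customer_id):
    return [{"item": "widget"}]                 # stubbed data
```

Either way the same data crosses the wire; the difference is that a new consumer can call get_customer alone, without dragging payments and history along with it.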

Perhaps the biggest gains from taking a microservices approach reside in cloud-deployed applications. Monolithic applications are sometimes moved wholesale to the cloud, but every time a rev is made, even to a small part of the app, the entire solution must be rebuilt, tested and deployed. With the alternative, smaller units are revised with agility, and downtime is reduced or eliminated.

Friday, September 11, 2015

Queuing - Single Queue Works, But Why Doesn't Everyone Do That?

Queuing theory applies mathematics to the phenomenon of waiting -- using mathematical analysis to improve production processes. So why doesn't McDonald's utilize this approach? Customer wait times over 90 seconds can be problematic. But perceived wait time is more critical -- like page loads in your web browser. If the UI/UX designer has come up with a novel way of loading content, a user will wait out the progress bar. Or, if the content is compelling enough (think your bank account, or cat videos), the user will wait anyway.
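A taste of what that mathematical analysis looks like: the classic M/M/1 model (a standard queuing-theory result, not anything specific to McDonald's) gives the average time in a single-server system as 1/(mu - lam), where lam is the arrival rate and mu the service rate. The numbers below are illustrative.

```python
# M/M/1 single-server queue: average time in system = 1 / (mu - lam).
def mm1_time_in_system(lam, mu):
    """Average time a customer spends waiting plus being served."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    return 1.0 / (mu - lam)

# Say customers arrive once a minute (lam = 1) and service takes
# 45 seconds (mu = 4/3 per minute):
w = mm1_time_in_system(1.0, 4.0 / 3.0)
print(round(w * 60), "seconds")  # 180 seconds
```

Even though each order only takes 45 seconds to serve, the average customer spends three minutes in the system -- well past that 90-second threshold, which is exactly why the math matters.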

But it helps to think of getting your french fry fix as involving a series of work stations, each with a separate task. And each task takes time (e.g. ordering food, instructing workers, retrieving hot fries, packaging the food, payment). These stations are generally attended in sequence, and each station takes some time to process one customer. The sequence of stations is a pipeline. But some steps take longer than others -- so building in wait time at certain points actually serves to move the production process along without bottlenecks. McDonald's provides several queues in parallel: the first for ordering and paying, and the second an (invisible) station where customers wait while their food is gathered and served. The time it takes to cook the food is accounted for in the time taken to gather the food items.
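The station-by-station view can be sketched as data. The station names and per-customer times below are invented for illustration; the point is that a pipeline's steady-state throughput is limited by its slowest station, which is why the slow "gather food" step gets split off into its own parallel waiting area.

```python
# Illustrative pipeline: seconds each station needs per customer.
stations = {
    "order": 30,
    "pay": 20,
    "gather_food": 75,  # includes cook time -- the bottleneck
}

# The slowest station caps the whole pipeline's throughput.
bottleneck = max(stations, key=stations.get)
throughput_per_hour = 3600 / stations[bottleneck]
print(bottleneck, round(throughput_per_hour), "customers/hour")
```

Running the faster stations flat out would just pile customers up in front of the bottleneck; parking them in a second, invisible queue keeps the ordering line moving.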

The same analysis can be applied to packet switching in internetworking, or to automobile assembly. For my master's work, I looked at a supplier to a Japanese auto manufacturer -- with a supply chain represented as a multi-input, multi-stage queuing network. An input order to the supply chain was represented by stochastic variables, for the occurrence time and for the quantity of items to be delivered in each order. I had seen such an approach when learning about the (now sunsetted) wide area network at the central bank, where I was involved with information security. A "star" network topology has a central top-level node that all other nodes connect to; "packets" are passed through the central node. This helped me understand alternate ways of queuing -- something I have carried forward with my efforts at automating workflow in Bluedog's SaaS offering, where 'jobs' have to be passed from one stage to another, based on business rules.
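A minimal sketch of that kind of rule-based job routing between stages -- the stage names, routing rules, and job fields here are invented, not drawn from any actual Bluedog implementation:

```python
from collections import deque

# Each stage's rule inspects a job and returns the next stage,
# or None when the job is done. All values here are hypothetical.
RULES = {
    "intake":  lambda job: "review" if job["amount"] > 1000 else "approve",
    "review":  lambda job: "approve",
    "approve": lambda job: None,   # terminal stage
}

def run(jobs):
    queue = deque(("intake", job) for job in jobs)
    while queue:
        stage, job = queue.popleft()
        job.setdefault("trail", []).append(stage)  # record the path taken
        nxt = RULES[stage](job)
        if nxt:
            queue.append((nxt, job))
    return jobs

done = run([{"amount": 500}, {"amount": 5000}])
print([j["trail"] for j in done])
# [['intake', 'approve'], ['intake', 'review', 'approve']]
```

The single work queue plays the role of the star network's central node: every job passes through it between stages, so adding a new stage or rule means touching the routing table, not the plumbing.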

The typical first-come, first-served system of waiting in line is incredibly inefficient, in terms of both time and space. First, it essentially rewards people for wasting their time: those who arrive first get the goods, but they also spend more hours of their precious time on Earth standing around and waiting. Second, long lines tend to create congestion and bottlenecks that cause problems for others. Think of the traffic jams that form as cars try to leave a football game, or the long boarding line at an airport that snakes across the walkway, getting in everyone else's way.

Read more here... Danish Researcher Report or read this guy's ideas.