Monday, June 17, 2019

How to Make Continuous Delivery a Reality in an Agile Environment

Continuous integration and continuous delivery (CI/CD) are core elements of successful DevOps. Systems engineers often start with CI because it is familiar. With a DevOps focus, organizations then uncover the configuration, packaging, and orchestration work necessary to an effective software development life cycle (SDLC). This empowers developers, administrators, and engineers to create valuable CD practices, adding to agility.

Where less experienced developers might struggle with CI/CD performance, testing delays, and other bottlenecks, the enterprise would do well to develop processes and best practices that make DevOps in the cloud a value-driven methodology. Because CD is all about updating web services frequently, this shortens the SDLC and saves money. In public clouds such as AWS and Azure, delivery is organized into pipeline stages (e.g. dev, test, staging, and production). When containers are implemented with a platform-as-a-service (PaaS) approach, stages become sandbox environments, scratch instances, and production instances.
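The stage-promotion flow above can be sketched in a few lines of Python. The stage names follow the post; the gate checks and function names are our own illustration, not any vendor's API:

```python
# Minimal sketch of promoting a build through pipeline stages.
# A build advances stage by stage until a quality gate fails.

STAGES = ["dev", "test", "staging", "production"]

def promote(build_id: str, gates: dict) -> str:
    """Advance a build through each stage; return the last stage reached.

    `gates` maps a stage name to a predicate on the build. Stages with
    no gate are treated as passing (illustrative default).
    """
    reached = None
    for stage in STAGES:
        if not gates.get(stage, lambda b: True)(build_id):
            break
        reached = stage
    return reached

gates = {
    "test": lambda b: True,     # e.g. automated test suite passed
    "staging": lambda b: True,  # e.g. smoke tests in staging passed
}
print(promote("build-42", gates))  # production
```

A failing gate stops promotion at the previous stage, which is exactly the behavior a CD pipeline needs to keep a bad build out of production.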

The benefit of such an approach is that both the work outputs and the products themselves gain flexibility. Regular face-to-face interactions and collaborations between team members ensure the scrum teams level-set expectations. Finally, continuous delivery adds value throughout the life cycle, so the end product is more secure and more reliable. Following the Agile Manifesto goes hand in hand with addressing evolving end-user requirements.

For CD, ensure user stories map correctly to those requirements and that each story rolls up to an epic representing a standalone feature. This enables the DevOps team to release reasonably sized components of functionality that are consumable by users and traceable back to the release plan. Verification at each stage matters because this process defines the acceptance criteria, so stakeholders know when something is declared “finished.”

Schema, user interface, access control rights, and static resources such as icons and images are all part of the creation process, and we manage them just as diligently as source code. The DevOps team checks assets into a version control system (Git or Subversion) as a single source of truth. This benefits the client by ensuring that developers make changes in a segregated environment: the risk of catastrophic failures is greatly reduced, and integration into a risk management-based security framework is seamless.

The organization should understand that automated quality processes are essential; Selenium is a go-to tool for testing functionality. Several verifications come before functional testing. Static code analysis tools, such as PMD, ensure code conforms to a single style. Unit test coverage is also essential: establish a set of Key Performance Indicators (KPIs) requiring coverage of at least 75% of code. Finally, after these automated tests pass, implement a manual peer review. This enables seasoned developers to spot opportunities for performance improvement where automated tools can’t.
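The 75% coverage KPI can be enforced as a simple CI gate. This is a minimal sketch (the function name and inputs are ours; real pipelines would read the numbers from a coverage report):

```python
# Hedged sketch of a CI quality gate enforcing a 75% unit-test coverage KPI.
COVERAGE_KPI = 0.75

def coverage_gate(covered_lines: int, total_lines: int,
                  kpi: float = COVERAGE_KPI) -> bool:
    """Return True when measured line coverage meets the KPI."""
    if total_lines == 0:
        return False  # no measurable code: fail closed
    return covered_lines / total_lines >= kpi

print(coverage_gate(820, 1000))  # True  (82% >= 75%)
print(coverage_gate(600, 1000))  # False (60% < 75%)
```

Failing the build when the gate returns False keeps the KPI from quietly eroding between releases.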

Monday, June 10, 2019

GDPR will impact more than privacy

Just as GDPR hugely impacted how millions of organizations handle personal data when it came into force last year, Strong Customer Authentication (SCA) will have profound implications for how businesses handle online transactions, and for how we pay for things in our everyday lives, when it is enforced on September 14.

SCA will require an extra layer of authentication for online payments. Where a card number and address once sufficed, customers will now be required to present at least two of the following three factors to do anything as simple as ordering a taxi or paying for a music streaming service: something they know (like a password or PIN), something they own (like a token or smartphone), and something they are (like a fingerprint or facial biometrics).
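The "two of three factors" rule is easy to express in code. This illustrative check (category names are ours, mapped from the post's knowledge/possession/inherence factors) shows the logic a payment flow would apply:

```python
# Illustrative check of SCA's "at least two of three factor categories" rule.
# Categories: knowledge (password/PIN), possession (token/smartphone),
# inherence (fingerprint/facial biometrics).

def sca_satisfied(presented: set) -> bool:
    """True when factors from at least two distinct categories are presented."""
    categories = {"knowledge", "possession", "inherence"}
    return len(presented & categories) >= 2

print(sca_satisfied({"knowledge", "possession"}))  # True: PIN + smartphone
print(sca_satisfied({"knowledge"}))                # False: password alone
```

Note that two factors from the same category (say, two passwords) would not count, which is why the check intersects with distinct category names rather than counting raw credentials.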

Without careful preparation, failed transactions and additional friction may have a significant negative impact on conversion rates.

Monday, April 29, 2019

How Software Was Egregiously (and Poorly) Used to Hide Major Engineering Deficiencies

In this article on IEEE Spectrum, we read:

It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs, including the opposite angle-of-attack sensor, in the computer's determination of an impending stall. As a lifetime member of the software development fraternity, I don't know what toxic combination of inexperience, hubris, or lack of cultural understanding led to this mistake. But I do know that it's indicative of a much deeper problem. The people who wrote the code for the original MCAS system were obviously terribly far out of their league and did not know it.
So Boeing produced a dynamically unstable airframe, the 737 Max. That is big strike No. 1. Boeing then tried to mask the 737's dynamic instability with a software system. Big strike No. 2. Finally, the software relied on systems known for their propensity to fail (angle-of-attack indicators) and did not appear to include even rudimentary provisions to cross-check the outputs of the angle-of-attack sensor against other sensors, or even the other angle-of-attack sensor. Big strike No. 3... None of the above should have passed muster. None of the above should have passed the "OK" pencil of the most junior engineering staff... That's not a big strike. That's a political, social, economic, and technical sin... 
The 737 Max saga teaches us not only about the limits of technology and the risks of complexity, it teaches us about our real priorities. Today, safety doesn't come first -- money comes first, and safety's only utility in that regard is in helping to keep the money coming. The problem is getting worse because our devices are increasingly dominated by something that's all too easy to manipulate: software.... I believe the relative ease -- not to mention the lack of tangible cost -- of software updates has created a cultural laziness within the software engineering community. Moreover, because more and more of the hardware that we create is monitored and controlled by software, that cultural laziness is now creeping into hardware engineering -- like building airliners. Less thought is now given to getting a design correct and simple up front because it's so easy to fix what you didn't get right later.
The article also reveals that: "not letting the pilot regain control by pulling back on the column was an explicit design decision. Because if the pilots could pull up the nose when MCAS said it should go down, why have MCAS at all?  ...MCAS is implemented in the flight management computer, even at times when the autopilot is turned off, when the pilots think they are flying the plane." 

Tuesday, April 23, 2019

Knowledge Worker Productivity Improvements with Machine Learning

Leveraging machine learning to build capabilities that recognize context, concepts, and meaning opens interesting new opportunities for collaboration between knowledge workers and computational power. For example, Bluedog’s experts can now provide more of their own input for training, quality control, and fine-tuning of algorithm-based outcomes. We use the computational power of our servers to augment the expertise of human collaborators, which helps create new areas of leverage for our experts.

For example, at Bluedog, we use several algorithm-based tools to quickly assess opportunities for our clients. We extract information from Word documents locally for multiple uses. One tool takes advantage of each Word document’s XML metadata. From there, we use a regex library to find each targeted word or phrase in the document and add it to a list. The tool then runs for-loops over the XML, scanning for relevant patterns to extract data.
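Under the hood, a .docx file is a ZIP archive whose body lives in word/document.xml, so the approach described above can be sketched with the standard library alone. The function name and structure are our illustration, not the actual Bluedog tool:

```python
# Sketch of regex extraction from a Word document's XML.
# A .docx is a ZIP archive; the main body is in word/document.xml.
import re
import zipfile

def find_phrases(docx_file, phrases: list) -> list:
    """Return (phrase, offset) pairs for each target phrase found in the XML."""
    with zipfile.ZipFile(docx_file) as zf:
        xml = zf.read("word/document.xml").decode("utf-8")
    hits = []
    for phrase in phrases:                      # for-loop over target phrases
        for match in re.finditer(re.escape(phrase), xml):
            hits.append((phrase, match.start()))
    return hits
```

`re.escape` keeps literal phrases from being misread as regex metacharacters; a real tool would also strip the XML tags that Word inserts inside runs of text, which can split a phrase across elements.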

Knowledge workers (the staff or consultants who reason, create, decide, and apply insight to non-routine cognitive processes) can contribute to redesigning work processes and team member roles. Consider financial auditing, where AI is likely to become pervasive. Often, when AI offers a finding, the algorithm’s reasoning isn’t obvious to the accountant, who ultimately must offer an explanation to a client; this is the classic “black box” problem. To improve this outcome, Bluedog recommends providing an interface so experts can enter concepts they deem important into the system and test their own hypotheses. In this way, we make the models accessible to common sense.

As cybersecurity concerns mount, organizations have increased the use of instruments to collect data at various points in their network to analyze threats — and to address “Internet-of-Things” (IoT) devices. However, many of these data-driven systems do not integrate data from multiple sources. Nor do they incorporate the common-sense knowledge of cybersecurity experts, who know the range and diverse motives of attackers, understand typical internal and external threats, and appreciate the degree of risk to an organization. 

Bluedog’s experts specify the use of Bayesian models, which employ probabilistic analysis to capture complex interdependence among risk factors, combined with expert judgment. In cybersecurity for enterprise networks, complicating factors include the large number and variety of devices on the network. It is crucial to tap the knowledge of the organization’s security experts about attackers and the risk profile to better intercept cybercriminals.
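A toy example shows how a Bayesian model combines an expert-supplied prior with observed evidence. The numbers here are invented for illustration, not drawn from any engagement:

```python
# Toy Bayes update for a threat alert: combine a prior attack probability
# (set by the organization's security experts) with the alert's detection
# rates to get the probability that an alert is a real attack.

def posterior(prior: float, true_pos: float, false_pos: float) -> float:
    """P(attack | alert) via Bayes' rule.

    prior      -- expert-assessed P(attack)
    true_pos   -- P(alert | attack), the detector's sensitivity
    false_pos  -- P(alert | no attack), the detector's false-alarm rate
    """
    p_alert = true_pos * prior + false_pos * (1 - prior)
    return (true_pos * prior) / p_alert

# Rare attack (1% prior) with a sensitive but noisy detector:
print(round(posterior(0.01, 0.95, 0.05), 3))  # 0.161
```

Even a 95%-sensitive detector yields only a ~16% posterior when attacks are rare, which is why expert priors and risk profiles matter so much in triaging alerts.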

Monday, April 22, 2019

SIFT Score - the West's Answer to China's Social Credit Rating. Thanks, Big Brother

Data on what you buy, how, and where is secretly fed into AI-powered verification services, according to the Wall Street Journal. These are supposed to help companies guard against credit-card and other forms of fraud.

More than 16,000 signals are analyzed by a service called Sift, which generates a "Sift score," used to flag devices, credit cards and accounts that a vendor may want to block based on a person or entity's overall "trustworthiness" score. From the Sift website: "Each time we get an event -- be it a page view or an API event -- we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we've seen both on your site and within our global network, and determine a user's Score. There are features that can negatively impact a Score as well as ones which have a positive impact."
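The feature-weighting idea Sift describes can be illustrated with a toy scoring function. The feature names, weights, and logistic mapping below are entirely our invention, not Sift's actual model:

```python
# Hypothetical illustration of weighted feature scoring: each extracted
# feature shifts a trust score up or down according to a learned weight,
# and a logistic curve maps the weighted sum onto a 0-100 scale.
import math

def trust_score(features: dict, weights: dict) -> float:
    """Map a weighted feature sum onto a 0-100 score via a logistic curve."""
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 100.0 / (1.0 + math.exp(-z))

weights = {
    "mismatched_billing_country": -2.0,  # negative impact on the score
    "account_age_days": 0.01,            # positive impact on the score
}
user = {"mismatched_billing_country": 1, "account_age_days": 400}
print(round(trust_score(user, weights), 1))  # 88.1
```

The asymmetry the Sift site describes, with some features pushing a score down and others up, corresponds here simply to the sign of each weight.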

The system is similar to a credit score, except there's no way to find out your own Sift score. It sounds a lot like the data that China's social credit system, in part, uses. In the PRC, a person's social score can vary depending on their behavior. The exact methodology is secret, but examples of infractions include bad driving, smoking in non-smoking zones, buying too many video games, and posting fake news online. While Edward Snowden certainly demonstrated the global extent of the US surveillance state, corporate entities have not implemented anything on the level of the Chinese social scoring system. Yet.