Before the internet became widely available, back when 3.5” floppy disks were popular, I was an avid purchaser of computer magazines. My choice of magazine was determined not by the quality of the articles but by the contents of the floppy disk that came with it: this was the main way I could get access to new software.
With the proliferation of the Internet I can get access to new software whenever I like, simply by downloading it or, increasingly, by accessing it directly via a web browser.
Shrink Wrapped Software
Just as the way in which software is distributed has changed, so has the way in which we write it. In the past we used to write software, test it, and release it: stick it on a floppy disk and send it to the customer. This worked fine for software which the consumer would run themselves, but for services or other long-running software, developers needed to provide more.
Services that monitor Services
For long-running software (services), like an email server, a developer has to provide information about what the product is doing. Traditionally this was done via a set of log files. These provided data on what the server was doing, but unless the customer was actively watching the log files there was no way for them to be notified of an issue; typically the first thing the customer noticed was that the email server wasn’t available. To solve this, third-party software developers created tools which would automatically read the log files and generate alerts. At this point we have a customer who purchases the email server from company A, and then purchases monitoring software from company B to ensure that the software from company A is running correctly.
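At its core, this style of third-party tooling boils down to pattern-matching on log files. A minimal sketch of the idea (the log format and error pattern below are invented for illustration, not taken from any real product):

```python
import re

# Hypothetical log excerpt from an email server; the format is illustrative.
LOG_LINES = [
    "2014-03-01 09:00:01 INFO  queue flushed, 42 messages delivered",
    "2014-03-01 09:05:12 ERROR could not bind to port 25: address in use",
    "2014-03-01 09:05:13 FATAL shutting down",
]

def scan_for_alerts(lines, pattern=r"\b(ERROR|FATAL)\b"):
    """Return the log lines that should trigger an alert."""
    matcher = re.compile(pattern)
    return [line for line in lines if matcher.search(line)]

alerts = scan_for_alerts(LOG_LINES)
for alert in alerts:
    print("ALERT:", alert)
```

Note how the monitor only sees what the server chose to log: if the product doesn’t write the right lines, or the pattern doesn’t match them, the alert never fires.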
Developers Running their Own Software: The Birth of DevOps
With the shift to providing software accessed on the web via a browser, the original creators of the software encountered these monitoring problems head on. Now developers were creating their application in their chosen programming language (C, C#, Java, etc.) and then using a third-party product to implement the monitoring needed to run their software successfully. This is:
1. Costly; you need to employ two people, one versed in the product and the other versed in the monitoring system, or one very smart person.
2. Not very successful; it is hard to marry up the interfaces between the systems: is the product generating enough log information, and is the monitoring system capturing it correctly?
This approach also effectively splits a product into two pieces: one piece captures the functional requirements (what the product should do) and is implemented by the developer; the other piece is the non-functional requirements (how the product performs this functionality) and is captured by a monitoring system run by an operations team.
CloudWave - The DevOps Movement & Technology Implications
The DevOps (Developer + Operations) movement really started when developers started hosting and supporting their own software. Suddenly the pains experienced by the operations team were experienced directly by the developers.
Currently there are few tools available to support developers in creating both the core functionality of a product and the necessary monitoring infrastructure. Most monitoring systems are separate from the core software the developer has created. To get automated responses to changes detected by the monitoring system, developers have needed to write bash and shell scripts and custom configuration files for the monitoring system.
This separation between the core application and a set of configuration files and bash scripts limits the functionality of the monitoring system and prevents the application from responding to changes in its own behaviour. CloudWave is going to address this.
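That loose coupling might look something like the following sketch: a stand-alone monitor polls a metric against a threshold taken from a hand-written configuration file and shells out to a separate recovery command, while the application itself never hears that anything happened. All names and values here are invented for illustration:

```python
import subprocess

# Hypothetical threshold, as it might appear in a monitoring config file.
MAX_QUEUE_LENGTH = 100

def check_and_react(queue_length, recovery_cmd=("echo", "restarting mail service")):
    """Outside-in monitoring: if the metric breaches the threshold,
    run an external recovery command. The monitored application has no
    visibility into (or control over) this decision."""
    if queue_length > MAX_QUEUE_LENGTH:
        subprocess.run(recovery_cmd, check=True)
        return "restarted"
    return "ok"
```

The glue between the two halves is nothing more than a process boundary and a shared naming convention, which is exactly why it is fragile.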
CloudWave aims to provide a developer with all the tools they need to tightly integrate the monitoring infrastructure with the core application’s functionality. This tight integration allows the developer to write applications which can adapt to changes in their environment. This provides four main advantages to the developer:
1. They no longer need to split their application into functional and non-functional requirements; they can capture their non-functional requirements directly in their application’s code.
2. They can have code in their application which listens for and responds to notifications about the application’s behaviour. E.g. if an SLA is about to be broken, the application can be notified and can adapt to ensure the SLA continues to be met.
3. It’s cheaper; there is no need to employ two people with two different skill sets.
4. It’s more reliable; the integration between the monitoring and the application is closer and better defined than before.
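CloudWave’s actual APIs are not spelled out here, so the following is only an illustrative sketch of the integrated style point 2 describes: the application registers a callback for a monitoring event and adapts (here, by adding workers) before the SLA is broken. All class and event names are hypothetical:

```python
class Monitor:
    """Toy in-process monitor: application code registers interest
    in named events, and the monitoring layer emits them."""
    def __init__(self):
        self.listeners = {}

    def on(self, event, callback):
        self.listeners.setdefault(event, []).append(callback)

    def emit(self, event, payload):
        for callback in self.listeners.get(event, []):
            callback(payload)

class Application:
    def __init__(self, monitor):
        self.workers = 2
        # The non-functional requirement is captured directly in
        # application code, not in an external config file.
        monitor.on("sla_at_risk", self.scale_up)

    def scale_up(self, payload):
        # Adapt before the SLA is actually broken.
        self.workers += payload.get("extra_workers", 1)

monitor = Monitor()
app = Application(monitor)
monitor.emit("sla_at_risk", {"extra_workers": 2})
```

Because the listener lives inside the application, it can react using the application’s own state and logic, something an external script can never do.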
This approach of unifying the application’s logic with a monitoring and triggering system presents some really exciting new possibilities. Will developers only use it to ensure SLAs are met, or can they use the same approach and technology to change business logic? Consider the following scenario:
An online store runs an A/B test: the store has two web pages for the same product, each with a different visual design. Will one design sell more products than the other?
It turns out that customers in the US buy more using page A, and customers in the EU using page B.
The monitoring of the pages’ performance, the selection of the best page to display, and where to display it could all be implemented using the technologies we’re considering in CloudWave. Not only does CloudWave allow the developer to create an application which automatically responds to its non-functional requirements, it also allows the developer to create an application which can automatically respond to changes in business logic too.
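Sticking with the scenario above, here is a sketch of how the same event-driven style could drive business logic: conversion figures gathered by the monitoring layer feed a simple rule that picks which page variant to serve per region. The data and function names are invented for illustration:

```python
# Invented conversion data the monitoring layer might have gathered:
# region -> {page variant: purchases per 1000 visits}
CONVERSIONS = {
    "US": {"A": 31, "B": 24},
    "EU": {"A": 18, "B": 27},
}

def pick_variant(region, default="A"):
    """Serve whichever page variant converts best in the visitor's
    region, falling back to a default for unknown regions."""
    rates = CONVERSIONS.get(region)
    if not rates:
        return default
    return max(rates, key=rates.get)
```

As the monitoring layer updates the conversion figures, the application’s choice of page updates with them; no separate operations tooling is involved.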
The implications are quite exciting and I’m looking forward to exploring them further as we progress with the project.