Commoditization of services might sound like a new trend. After all, all those Ubers and Airbnbs seem like fresh ideas that appeared only recently. We still call them “market disruptors”, right? But probably only those working with IT every day realize that the same trend, aiming for the smallest possible item performing a unique function, has been present in the IT industry for decades. In this article, I will present the trend as I see it and share my observations. For the casual reader, I promise to keep it as user-friendly as possible!

With a certain level of simplification, I believe I can state that commoditization in IT started when the “distributed environment”, as the IBM people call it, took off with the approach of creating simple, stand-alone computing platforms we now refer to as servers. The idea of offering services installed on multiple servers, dispersed globally for optimal customer experience and resilience, is still popular despite the obvious faults of this approach. For example, each server came with a fixed set of resources, designed by the vendor to fit the majority of use cases, which usually did not match what the hosted application really needed. These servers were not as resilient as the mainframe, but at a reasonable price they allowed a much faster concept-to-software-service process. The inefficiencies naturally led to costs that normally wouldn't have to be incurred, but also to indirect outcomes such as developers, especially in the high-performance computing sector, having to perform low-level optimizations of their software.

The second level of infrastructure commoditization is what I call the concept of defining the key resource areas commonly used by software and creating platforms that let administrators look after resource pools. Typically, this is referred to as server virtualization. Imagine your developers work on two projects where the product in each case is an application, but with entirely different behavior. One application doesn't store much data but performs heavy computations – no problem, the admin can spawn (yes, spawn!) a new server with modest memory but multiple processors (as long as the app can make use of them). The other application does little math but needs to load huge objects into memory? The admin can take the memory saved on the first project and allocate it here. Running out of resources in the pools? Just hot-swap in another blade of CPUs and RAM, or expand the SAN for data storage.
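
To make the pool idea a bit more concrete, here is a minimal sketch in Go – purely illustrative, the types and numbers are my own and do not correspond to any particular hypervisor's API – of two differently shaped virtual servers being carved out of one shared pool:

```go
package main

import "fmt"

// Pool models the shared resources of a virtualization host or blade chassis.
type Pool struct {
	Cores int
	RAMGB int
}

// VM describes a virtual server carved out of the pool.
type VM struct {
	Name  string
	Cores int
	RAMGB int
}

// Allocate subtracts a VM's share from the pool, refusing to overcommit.
func (p *Pool) Allocate(vm VM) error {
	if vm.Cores > p.Cores || vm.RAMGB > p.RAMGB {
		return fmt.Errorf("not enough resources left for %s", vm.Name)
	}
	p.Cores -= vm.Cores
	p.RAMGB -= vm.RAMGB
	return nil
}

func main() {
	pool := Pool{Cores: 32, RAMGB: 256}

	// One compute-heavy app (many cores, little memory) and one
	// memory-heavy app (few cores, the RAM saved on the first one).
	for _, vm := range []VM{
		{Name: "number-cruncher", Cores: 24, RAMGB: 32},
		{Name: "object-cache", Cores: 4, RAMGB: 192},
	} {
		if err := pool.Allocate(vm); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("spawned %s, pool left: %d cores / %d GB RAM\n",
			vm.Name, pool.Cores, pool.RAMGB)
	}
}
```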

Around the same time, the CPU itself underwent even further commoditization – we could call it level 3. Multiple solutions were implemented in processors to improve their efficiency without increasing the clock speed, or when further miniaturization of the manufacturing process was not yet achievable. One of the key optimization areas was based on the observation that a lot of a core's potential is wasted within the processor cycle itself. As a result, various features appeared: simultaneous multithreading (SMT, marketed by Intel as Hyper-Threading), which exposes each physical core as several logical cores, as well as the VT instruction set extensions that support the actual virtualization. The end result is logical cores within physical cores forming a modern CPU.
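
You can see those logical cores from software with a trivial Go snippet (standard library only, nothing vendor-specific): runtime.NumCPU() reports logical processors, so on an SMT/Hyper-Threading machine it is typically double the physical core count.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU returns the number of logical CPUs usable by the process;
	// with SMT/Hyper-Threading enabled this is usually 2x the physical cores.
	fmt.Println("logical CPUs:", runtime.NumCPU())

	// GOMAXPROCS(0) only queries the current setting without changing it.
	fmt.Println("GOMAXPROCS :", runtime.GOMAXPROCS(0))
}
```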

Level 4 is a more recent tale: the arrival of SDNs, or “software-defined networks”. In simple terms, the virtualized stack of standard commodities could be expanded to cover the network as well, which in the internet era is a commodity in itself. The idea is to present a complete set of infrastructure items matching the development design, which naturally speeds up the infrastructural design phase of a project and, most importantly, offers a great deal of simplicity to everyone involved in the development. Deployment of new applications or services has never been this easy.

With software-defined networking and pools of virtual resources at hand, it is not hard to notice that this approach still borrows from the good old “application hosted on a server” concept of what I called the “level 1” era. Each virtual server comes with a lot of overhead, including the operating system and its “wealth” of features. An OS is usually built to just work for as many setups as possible, which by definition makes it “not the most optimal piece”. Level 5 of commoditization comes to the rescue: containers, among them the most popular container technology – Docker. A single server can run multiple containers, which from the outside look like virtual machines themselves; however, they contain only the bare minimum of the OS that the applications hosted in them need to operate. Brilliant, isn't it? Add to that orchestration software such as Kubernetes to operate fleets of containers.
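
To illustrate the “bare minimum” point: a Go service compiles to a single static binary, so the container image only needs that binary and whatever little else the app requires – no full OS userland. A minimal sketch (the port and handler are invented for illustration):

```go
package main

import (
	"fmt"
	"net/http"
)

// A tiny HTTP service; built with CGO_ENABLED=0 it becomes a fully static
// binary that can run in a near-empty container image (e.g. a "scratch" base),
// carrying none of the usual operating-system baggage.
func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a minimal container")
	})
	http.ListenAndServe(":8080", nil)
}
```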

So what's next? Commoditized infrastructure pieces and a commoditized operating system allow a granular choice of elements to operate applications or services. It doesn't look like much more can be achieved here. Who can push the boundaries further? The answer is based on the fact that, just like the OS, almost all concepts in popular technology are designed so that 95% of the use cases succeed. But what happens when you're in the remaining 5%? And who usually ends up there? For the last decade, in my opinion, two areas fall into that 5%: high-frequency trading and high-performance gaming. Add to them the extreme data center owners (aka hyperscalers), such as Google or Facebook, and you'll notice they all already have the solution. Software will only ever be software – it gets loaded into standard memory and its operations are executed on a standard core. But if the same operations, applied in a virtual container, happen millions of times per day on a standard piece of commodity hardware, why not move them into the hardware itself?

FPGAs, or field-programmable gate arrays, are becoming popular because they allow just that. The software developer's work doesn't have to end as soon as the code is sent to the compiler. The compiler allows the code to be loaded and executed by a typical CPU, which by design is universal and can do a lot of things. Most software, however, performs a set of standard operations and then something unique to its nature, which gets repeated millions of times. FPGAs allow commoditization of that unique, repeatable activity, which, when compiled onto the programmable chip, can deliver a performance boost of 10-100x at only a 20x increase in power draw. That is an immense cost reduction compared to simply scaling the pool of CPU/memory resources.
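
What does such a “unique, repeatable activity” look like in code? A hedged sketch in Go, with the filter and its coefficients invented purely for illustration: a tight inner kernel like this one, executed millions of times per second, is exactly the kind of logic worth compiling down to the programmable fabric while the rest of the program stays on the CPU.

```go
package main

import "fmt"

// fir applies a small fixed-coefficient filter to a window of samples.
// On a CPU this loop runs sequentially; laid out on an FPGA, the multiplies
// and adds can all happen in parallel, once per clock cycle.
func fir(samples, coeffs []int32) int64 {
	var acc int64
	for i, c := range coeffs {
		acc += int64(samples[i]) * int64(c)
	}
	return acc
}

func main() {
	coeffs := []int32{1, 3, 3, 1}     // invented coefficients, illustration only
	samples := []int32{10, 20, 30, 40} // one window of incoming data
	// In a real system this call would sit inside a loop handling
	// millions of market-data or packet events per second.
	fmt.Println("filter output:", fir(samples, coeffs))
}
```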

Does that mean application programming has to become very low-level again, making a huge turnaround from the era of rapid development? Not at all – there are tools available that make it quite approachable, for example by utilizing Verilog. Certain standard blocks (such as DSPs) come pre-built into the chips, so developers only have to implement the higher-level logic. At the same time, programmable boards are available at reasonable prices and can be re-programmed, unlike the super-expensive custom boards produced on demand for HFT firms.

FPGAs are the next big thing – if the above combination of breakthroughs did not convince you, consider the $1.3 billion battle to purchase Lattice, a pure-play FPGA vendor. Consider the rise of Xilinx, specializing in programmable SoCs. Last but not least, Intel acquired Altera about a year ago precisely to join this promising business.

So what can the CIO/CTO do? While it might not be easy to find Verilog/VHDL specialists who are also capable developers with an understanding of time-to-market and quality programming concepts, it was only a matter of time before vendors tried to fill the gap. Amazon is already offering EC2 instances with programmable hardware. OVH offers a similar solution called RunAbove, which at the time of writing this article had completely sold out. Last but not least, there's Reconfigure.io, offering a compiler for Google's Go language that takes the code developers wrote and turns it into logic installed directly onto programmable boards.

What's next? Maybe containers hosting microservices could be moved to FPGAs?
