Author Archives: toddengen

About toddengen

I am a graduate of The Ohio State University and an employee of IBM. All posts are my own.

Moneyball, Sabermetrics, and Fit For Purpose

Well, baseball spring training is starting. My Minnesota Twins got underway last week with pitchers and catchers reporting. I look forward to the arrival of baseball each spring: it signals the end of a winter that has been rather harsh for Ohio, and it means the season is just around the corner. I follow several Minnesota Twins related blogs, among them Twinsdaily, Twinkietown, and Aaron Gleeman. Now that spring training is upon us, there is a lot of content starting to come out. Baseball bloggers have gotten very keen on sabermetrics, as popularized in the book Moneyball. The tenets of sabermetrics remind me a bit of Fit For Purpose for computer systems.

Moneyball describes how the Oakland Athletics began the widespread use of sabermetrics to build a winning baseball team. Sabermetrics uses advanced baseball statistics to help identify undervalued players. As small market teams, the Oakland A’s, and my Minnesota Twins, usually can’t afford to compete for the big name players against the likes of the New York Yankees, Boston Red Sox, and Los Angeles Dodgers. That means they have to find value in players that the other teams might overlook. Sabermetricians look for underappreciated players whose actual impact might be lost in the media hype and traditional scouting. They use advanced statistics such as on-base percentage (OBP), slugging percentage, on-base plus slugging (OPS), fielding independent pitching (FIP), and many more. Rather than depending on the hype and the eyes of scouts, they use measurements that tease actual value out of a player’s performance.

How does this relate to Fit For Purpose? Well, as I was thinking about Moneyball, it occurred to me that sabermetricians are looking for actual performance rather than hype, and that is similar to the way a fit for purpose methodology matches a computer system’s characteristics to the workload rather than depending on the hype of the day. In the computer world, we hear all about technologies — 22 nm and 14 nm semiconductor geometries, clock speeds, SPECint ratings, and the like. But how do those characteristics help with real, live workloads? Maybe they matter, maybe they don’t. What really matters is how well a system performs actual workloads and how well it meets user non-functional requirements. To sabermetricians, it doesn’t matter that Derek Jeter has great press and a reputation as a great hitter and a great fielder. The proof is in the performance. There are probably elements of truth to Jeter’s reputation, and it may have applied more when he was younger than when he was approaching the end of his career. But sabermetrics looks at the reality of his performance, especially his recent performance, to determine his current value. Similarly, the specs a processor has only matter a little. What matters more is how the entire architecture meets the requirements. Here are some examples of old baseball stats, each paired with a newer stat that does a better job (a quick calculation of a couple of these follows the list):

  • ERA (Earned Run Average) — FIP (Fielding Independent Pitching)
  • BA (Batting Average) — OBP (On-Base Percentage)
  • RBI (Runs Batted In) — Slugging Percentage
  • Fielding Percentage (errors/chances) — UZR (Ultimate Zone Rating)
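
As a quick illustration of the newer hitting stats, here is a small sketch that computes OBP, slugging, and OPS from basic counting stats. The season line is made up for the example; the formulas are the standard ones.

```python
# Computing OBP, slugging (SLG), and OPS from basic counting stats.
# The stat line below is invented purely for illustration.

def obp(h, bb, hbp, ab, sf):
    """On-base percentage: times on base divided by plate appearances
    (using the standard AB + BB + HBP + SF denominator)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    return total_bases / ab

# Hypothetical season line
ab, h, bb, hbp, sf = 550, 160, 70, 5, 4
doubles, triples, hr = 35, 3, 25
singles = h - doubles - triples - hr

on_base = obp(h, bb, hbp, ab, sf)
slugging = slg(singles, doubles, triples, hr, ab)
print(f"OBP = {on_base:.3f}")
print(f"SLG = {slugging:.3f}")
print(f"OPS = {on_base + slugging:.3f}")
```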

What are some of the metrics available in the computer world? Well, there are lots of benchmarks out there, such as the various SPEC benchmarks, the Transaction Processing Performance Council (TPC) benchmarks, the SAP SD benchmark (SAPS), and some others. And, of course, there are simple metrics like clock speed, cache size, memory size, etc. All of these provide some information, but they can be hard to extrapolate to real-world workloads. TPC-C was intended to look like a real-world workload, but it was developed 20-ish years ago, and the state of computing has changed to the point that TPC-C has become less dependable. Also, vendors have become fairly adept at tuning to the benchmarks, which further limits their applicability. If you want to know how a simple workload, like a computational workload with little I/O, would behave on various platforms, then the SPEC benchmarks can be useful. But most workloads don’t look like that, and the simple benchmarks pretty much mirror the clock speeds. So, are there any better examples out there? IBM’s System z performance team publishes a document, the Large Systems Performance Reference (LSPR), that attempts to do a better job, but it is unique to System z. There really is nothing like it in the open systems world. IBM has also developed an idea called workload factors that tries to account for variations in processor architectures and relate them back to System z. Workload factors were developed mostly from experience comparing various types of workloads in actual performance testing at IBM. Depending on the type of workload, the relative capacity may be higher or lower than the clock speed or simple benchmarks like SPEC would suggest.

Even without the level of analysis that IBM has done in the LSPR, you can see variations even with the fairly simple benchmarks. The more parallel or specialized the benchmark, the more it reflects the basic characteristics of the processor itself. So, SPECint measures integer arithmetic and SPECfp measures floating point. Since pretty much all processors have built-in integer and floating point units nowadays, SPECint and SPECfp mostly mirror clock speed. SAPS measures SAP transaction rates, and mostly measures the application tier, which is the most processor-bound and most parallelizable. And TPC-C is probably the closest common measurement to a transactional workload, but its age makes it less predictive than it would have been 15 or 20 years ago.
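
To make the idea concrete, here is a minimal sketch of how a workload-dependent factor can change a capacity estimate that starts from a simple benchmark ratio. The benchmark scores and workload factors below are invented for illustration; they are not IBM’s published workload factors or any real LSPR data.

```python
# Illustrative only: adjusting a raw benchmark ratio with a workload-dependent
# factor. All numbers are made up for the example.

benchmark_score = {          # e.g., a SPECint-like throughput number
    "server_a": 1200,
    "server_b": 800,
}

# Hypothetical workload factors: 1.0 means the benchmark ratio holds as-is;
# < 1.0 means that workload gets less benefit than the benchmark suggests.
workload_factor = {
    "cpu_bound_batch":        {"server_a": 1.00, "server_b": 1.00},
    "io_heavy_transactional": {"server_a": 0.70, "server_b": 0.95},
}

def relative_capacity(workload, base="server_b", target="server_a"):
    """Estimate target capacity relative to base for a given workload type."""
    raw_ratio = benchmark_score[target] / benchmark_score[base]
    adjustment = workload_factor[workload][target] / workload_factor[workload][base]
    return raw_ratio * adjustment

for wl in workload_factor:
    print(f"{wl}: server_a is ~{relative_capacity(wl):.2f}x server_b")
```

The point is simply that the same pair of machines can compare very differently depending on the workload you plan to run.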

There have been some interesting presentations and papers published by the Computer Measurement Group. Among them are:

  • Roger Rogers and Joe Temple, “Relative Capacity and Fit for Purpose Platform Selection”, CMG Journal of Computer Management, no. 123, March 2009
  • Rick Lebsack and Joe Temple, “Fit for Purpose Platform Selection, a Workload View”, CMG Journal of Computer Management, no. 129, August 2011
  • Presentations by Joe Temple at CMG’s conferences, most recently “Common Metrics Don’t Work”, Session 543

A Cloud Computing Primer, Part 3 — Public, private and hybrid clouds

 

Clouds at sunset
My last couple of posts have discussed the types of cloud services and the characteristics of clouds. Today, I’d like to discuss the general deployment models for clouds, i.e., public, private, and hybrid. I’ve touched on this in prior posts to a degree.

Public clouds are probably what most people think of when they talk about cloud computing. A public cloud is a cloud that is accessible publicly … duh … and is generally not restricted in its access. Anyone can create something in a public cloud. It is generally accessed over the public internet, often without secure HTTP. A public cloud is hosted by some organization and made available to most anyone with internet access. There is usually a charge for access to it. The most common examples would be Google, Amazon, and IBM SoftLayer cloud services.

Another common public cloud type is all of the Software as a Service (SaaS) offerings. In my last post I mentioned some of these: PhotoShop CC, Fitbit, Google Maps and cellphone and gaming apps. A lot of the cellphone apps are public, Software as a Service cloud applications.

The next type of cloud is a private cloud. A private cloud usually serves a single business and can only be accessed by users authorized by the owner of the cloud. The private cloud could be totally contained on the business’ premises, or it could be a publicly available application via the internet. A private, on-premise cloud would usually be managed by the company and could be any of IaaS, PaaS, or SaaS. Access to a private cloud is through an organization’s network, and may include external internet access via the company’s VPN.

A private cloud can also be off-premise. Most of the IaaS vendors offer a private cloud option within their public cloud. In fact, once you have created some server capacity in the public cloud, you have a private, off-premise cloud, because you are the only one authorized to use it. Many of the business-oriented public SaaS offerings, such as Salesforce.com or IBM’s Blue Mix, are off-premise, private clouds.

A hybrid cloud combines elements of these on-premise, off-premise, private, and public clouds. A couple of examples might explain this better. Suppose you are a retail store, and you have a cash register in your store that connects to your back office applications running on a server back at the home office. You might have a hybrid cloud in the back office where your own private, on-premise servers do some of the work, but then go to an off-premise service for some of the function. An example that jumps to mind is credit card validation. Your systems do not directly do the credit card validation and payment; you go off to a Visa, MasterCard, or American Express card payment service somewhere. Another use case for hybrid cloud is to have some of your infrastructure living on-premise and servicing most of your normal work, but in case you have a spike in demand, you go to an off-premise cloud to take up the slack that your own infrastructure is not built to handle.
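
Here is a tiny sketch of that “burst when demand spikes” decision. The capacity numbers are made up, and provision_offpremise() is just a stand-in for whatever provisioning API your off-premise provider actually exposes.

```python
# Sketch of a hybrid-cloud "burst" decision. All values are hypothetical.

ON_PREM_CAPACITY = 1000      # requests/sec the on-premise cloud is sized for
BURST_HEADROOM = 0.9         # start bursting at 90% of on-prem capacity

def provision_offpremise(extra_rps):
    # Placeholder for a real provisioning call to an off-premise IaaS provider.
    print(f"Provisioning off-premise capacity for ~{extra_rps:.0f} req/s")

def route(current_rps):
    """Decide whether a demand spike should spill over to the off-premise cloud."""
    threshold = ON_PREM_CAPACITY * BURST_HEADROOM
    if current_rps > threshold:
        provision_offpremise(current_rps - threshold)
    else:
        print(f"{current_rps} req/s handled on-premise")

route(600)    # normal day: stays on-premise
route(1400)   # holiday spike: burst to the off-premise cloud
```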

You can mix and match these cloud types to come up with very complicated arrangements of hybrid clouds, or you might have access to islands of private clouds. It all depends on what you are trying to do.

Breaking News: New IBM Power 8 Announcements

At Enterprise2014 this week, IBM has announced new Power8 S824L, E870 and E880 systems.

New 4U Linux only Power8 Server

The S824L is the Linux version of the S824, a two-socket, 4U rack-mounted system. This announcement fulfills a statement of direction from the scale-out Power8 announcements earlier in the year. Some highlights of the system are:

  • Up to 24 Power8 cores
  • 2 NVIDIA GPU accelerators
  • Up to 1 TB of memory
  • 10 PCIe Gen3 slots


There are two processor options: the system can have either two 10-core 3.42 GHz chips or two 12-core 3.02 GHz chips. There are 16 DDR3 memory slots that can take 16 GB, 32 GB, or 64 GB DIMMs. The system also includes twelve drive bays. The system runs little-endian Ubuntu Linux. There is currently no support for PowerVM or PowerKVM; it is bare-metal only.

Details on the S824L are available at http://www-03.ibm.com/systems/power/hardware/s824l/index.html

New IBM Power E870

The bigger news is the Enterprise E870 and E880. These systems are the next generation following the 770 and 780 and offer upgrade paths from the current 770s and 780s. They will be generally available (GA) November 18. They offer 90-day temporary Elastic Capacity on Demand (CoD) for processors and memory. They also enable Enterprise Pools, which allow a group of systems to share processor and memory entitlement between them. This lets a group of systems act as a cloud, with processor and memory capacity floating between the systems as needed for workload balancing or non-disruptive system maintenance. Some of the specs on the E870 are:

  • Up to 64 Power8 processor cores at 4.02 GHz, or up to 80 Power8 processor cores at 4.19 GHz
  • Up to 4 TB of 1600 MHz DDR3 memory
  • Up to 16 PCIe Gen3 slots

  • Greater redundancy with redundant system master clock and redundant system master Flexible Service Processor (FSP)
  • Optional 19-inch PCIe Gen3 4U I/O Expansion Drawer, each providing 12 PCIe slots
  • EXP24S SFF Gen2-bay Drawer with twenty-four 2.5-inch form-factor (SFF) SAS bays

New IBM Power E880

The E880 is similar to the E870, but bigger and faster. Some of the key specs on the E880 are:

 

  • Up to one hundred twenty-eight 4.35 GHz POWER8™ processor cores (up to 64 cores in 2014)
  • Up to 16 TB of 1600 MHz DDR3 CDIMM memory (up to 8 TB in 2014)

The E870 and E880 have the same capabilities for Capacity on Demand features, including Elastic Capacity on Demand and Enterprise Pools.

The Power8 processors offer significant performance gains compared to IBM’s Power7 and to Intel processors. The Power8 chips offer approximately four times the I/O and memory bandwidth of Power7 and Intel Sandy Bridge. IBM states that these machines are designed for Big Data, and that is how it intends to deliver on that claim: by providing unparalleled I/O and memory access to feed these fast processors so they can rip through Big Data in a hurry.

More information on the E870 and E880 is available at:

http://www-03.ibm.com/systems/power/hardware/e870/index.html

http://www-03.ibm.com/systems/power/hardware/e880/index.html

 

A Cloud Computing Primer, Part 2 — classes of clouds

 

Clouds at sunset

Last post I discussed the essential characteristics of cloud computing. This time, let’s discuss the types of cloud. In general, there are 3 types of cloud services:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

Infrastructure as a Service is a compute service that provides just the basic infrastructure. You generally get compute, network, and storage resources in an IaaS service, and usually an operating system, most often Linux. This is the basic offering of a cloud provider. Some examples are Amazon EC2, Microsoft Azure, Dropbox, and IBM SoftLayer. When you use these clouds, you are generally charged by the amount of compute and storage resources that you configure and use. There are always usage charges, and sometimes initial setup charges. For the network components, there are various charging models: some providers charge per GB of data traffic, while SoftLayer does not charge for traffic inside its network. When you set up an IaaS service, you select the size of the compute resources, usually in terms of the number of CPUs, the amount of memory, and the amount of disk configured, plus the operating system you want installed. When you push the button to create the infrastructure, it automatically creates the server images you requested. Most IaaS providers create only virtualized servers, i.e., Virtual Machines (VMs) on a server somewhere in the cloud. In the simplest versions, the cloud provider puts the VMs anywhere in its server farm; you don’t know anything about where they are placed.
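
As a rough sketch of what happens when you “push the button”: the self-service portal turns your choices into a provisioning request and hands it to the provider’s API. The request fields and the create_vm() function below are hypothetical, not any particular provider’s real API.

```python
# Hypothetical IaaS provisioning request: the portal collects your sizing and
# OS choices and submits them to the provider's automation.

import json

request = {
    "hostname": "web01.example.com",
    "cpus": 4,                    # number of virtual CPUs
    "memory_gb": 16,              # amount of memory
    "disk_gb": 100,               # provisioned disk
    "os_image": "ubuntu-14.04",   # operating system to install
    "datacenter": "anywhere",     # simplest case: the provider picks placement
}

def create_vm(req):
    """Stand-in for the provider's provisioning endpoint."""
    print("Submitting provisioning request:")
    print(json.dumps(req, indent=2))
    # A real provider would return an order or VM identifier to poll for readiness.
    return {"id": "vm-12345", "status": "provisioning"}

print(create_vm(request))
```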

IBM’s SoftLayer offers an additional option. On SoftLayer, you can also request a bare-metal, physical server. This is not virtualized, but is still provisioned “in the cloud.” The advantage of bare metal is performance: there is no virtualizer or hypervisor in the middle. You also have your own, non-shared server, which can help assure that you get a reliable level of performance and service. I said before that virtualization is critical to cloud. Bare metal is a bit of a violation of that principle, but it doesn’t violate the essential characteristics: it is still provisioned rapidly, it’s accessible via the internet, you pay as you go, etc.

Platform as a Service (PaaS) is the next layer up from IaaS. Platform as a Service includes some kind of middleware on top of the OS. Middleware examples include databases, messaging services, web application servers, and the like. I haven’t seen a lot of examples of this in the public cloud. Something like Dropbox might be considered Platform as a Service, since it offers sharing functionality on top of plain vanilla storage.

The third type of service is Software as a Service (SaaS). SaaS is a complete offering of an application or software service. Some of the more widely known examples would be Salesforce.com, PhotoShop CC, Fitbit, and Google Maps. A lot of cellphone apps are Software as a Service, or have parts that are. There are business apps in the cloud, such as Salesforce, and SAP is also starting to offer its software in the cloud. IBM offers a product called Blue Mix, which combines a number of its development tools, web tools, and other software products that can be used to quickly create new applications. There are also a number of gaming apps in the cloud. Software as a Service is frequently an end-user offering, whereas IaaS and PaaS are more commonly IT offerings.

Is cloud computing really new? What will turn out to be its most useful application? Where does traditional enterprise computing fit? I hope to explore some of these questions in future posts.

Any comments or suggestions on where to take this discussion are welcome.

A Cloud Computing Primer — Cumulo-nimbus or stratus?

In prior posts, I have discussed fit for purpose and selecting the right IT tool for the job depending on the non-functional requirements that an infrastructure needs to provide. Nowadays, cloud computing is all the rage in the IT world. In this post and following posts, I plan to discuss how Cloud fits into the range of infrastructure options.

Clouds at sunset

To start, what is Cloud Computing? The National Institute of Standards and Technology (NIST) defines it as: “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” NIST also states that cloud computing has five essential characteristics:

  • on-demand self-service
  • broad network access
  • resource pooling
  • rapid elasticity or expansion
  • measured service

More detail on the NIST definition is available at http://www.nist.gov/itl/csd/cloud-102511.cfm

There are corollaries to the definition and characteristics. Automation is clearly required to provide on-demand self-service and rapid elasticity; it can’t depend on little IT elves waiting for something to do. Pooling also requires automation, along with a pool of resources to draw from when requested. Measured service requires tooling to collect the usage data and probably a method of charging for the service. In the case of public clouds, you provide the provider with a credit card, PO, or other payment method right up front.
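
Measured service, reduced to a toy example: metering records usage, and a rate card turns it into a bill. The rates and usage figures below are made up for illustration.

```python
# Toy metering-and-billing calculation. All rates and usage are hypothetical.

HOURLY_RATE_PER_VM = 0.10        # dollars per VM-hour
RATE_PER_GB_OUTBOUND = 0.08      # dollars per GB of outbound traffic

usage = {
    "vm_hours": 720,             # one VM running for a 30-day month
    "outbound_gb": 150,          # metered network traffic
}

charge = (usage["vm_hours"] * HOURLY_RATE_PER_VM
          + usage["outbound_gb"] * RATE_PER_GB_OUTBOUND)
print(f"Monthly charge: ${charge:.2f}")   # $72.00 + $12.00 = $84.00
```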

Virtualization and standardization are critical to cloud computing. Standardization makes the automation possible, and it limits the choices an end user has to make, which keeps self-service simple. Virtualization, in turn, enables standardization by abstracting away the hardware details, and it also enables automation.

Cloud is the big buzz in IT these days and more and more providers of cloud services are cropping up. There are the default names such as Amazon and Microsoft Azure. My employer, IBM, is offering SoftLayer. And, many enterprises already have elements of infrastructure that might be considered a private cloud, meeting some or all of the characteristics as defined by the NIST.

Is cloud computing really new? What will turn out to be its most useful application? Where does traditional enterprise computing fit? I hope to explore some of these questions in future posts.

Any comments or suggestions on where to take this discussion are welcome.

IBM and Lenovo Pass US Government Checks — What’s Next For System x?

Late last week, the US government finished its review of IBM’s proposed sale of System x and some software elements to Lenovo. (See http://online.wsj.com/articles/ibm-server-sale-to-lenovo-passes-u-s-test-1408135593) What does this mean for System x, IBM, and the x86 server industry? Here are a few of my thoughts. Just a reminder: though I am an IBM employee, these are my own thoughts, not based on any specific inside information, nor am I speaking for IBM.

What does this mean for IBM? Is IBM no longer in the x86 business? Well, yes and no. IBM will no longer be selling x86 servers directly to customers for their own datacenters. However, IBM does have its cloud services business, SoftLayer, which sells x86 capacity in a cloud model. One unique feature of IBM’s SoftLayer offering is that it provides both virtualized servers and bare-metal capability. So, instead of selling real, flesh-and-blood servers to customers, IBM could sell them a bare-metal server in the cloud in addition to cloud-based virtual servers. Plus, IBM touts its large, fast network capabilities in SoftLayer, which could amount to selling the fastest access to an x86 server, even compared to a customer’s own datacenter and network. I think we’ll see IBM pushing even harder on SoftLayer now, and it may begin to introduce it in situations that would normally be considered a pure hardware sale.

IBM will also still be selling its traditional outsourcing offerings, which include all varieties of servers. These offerings include hosting and managing IBM servers, but also non-IBM equipment such as Oracle/Sun, HP, Dell, Cisco, etc., usually because a customer has those servers in its inventory. So, IBM can still sell x86 servers as part of outsourcing deals from any vendor, including Lenovo, HP, and SoftLayer.

What about the rest of the x86 industry? An associate made a comment to me along the lines of, “why should I care about System x if IBM is selling it?” Well, I think in this case, selling System x to Lenovo could put a lot of pressure on the x86 server industry. Lenovo has been very successful with Windows workstations, especially laptops, since it acquired IBM’s workstation portfolio 10 years ago. Lenovo is competitive or better on price, and it has done some nice innovation. If it takes the same approach to the server environment, it could be bad news for the traditional US-based x86 server vendors, HP and Dell. Lenovo would likely be very competitive in this space to try to win market share, and it probably has pricing power with Intel, based on its workstation and laptop business, that IBM no longer had. So, combine a company willing to compete on price with a potentially less expensive supply chain, and I think other major x86 manufacturers should be very concerned.

What do you think will happen? Are there potential political consequences of a non-US computer manufacturer making the big time in the US market? Feel free to comment.

 


Todd Engen

Moore’s Law of Aircraft Technology

I attended the Thunder Over Michigan airshow last weekend. What a great time seeing old warbirds, newer warbirds, plus the current Air Force inventory by way of the USAF Thunderbirds. Here are a few pictures:

P-38 Lightning — “Ruff Stuff”

P-51 Mustang — “Petie 2nd”

P-63 Kingcobra

PT-13 Stearman, primary trainer

F-86 Sabre

USAF Thunderbirds


Watching these time machines in action in Ypsilanti, MI got me thinking about the evolution of technology in general, and how a mature technology like aircraft compares to semiconductor technology. I’m sure you’ve heard comments like: if car technology had advanced like semiconductor technology, a car would cost $x (where x is a relatively small number, like $200) and go y MPH (where y is a relatively large speed, like 600 MPH). So, I was curious what the Moore’s Law curve might look like if you looked back at aircraft technology.

Moore’s Law states that the number of transistors that can be put on a chip doubles about every 2 years. Since the dimensions in the chip are shrinking, the corollary is that the speed of the chip goes up on a similar exponential curve. Using speed as a proxy for the improvement resulting from Moore’s Law, I thought it would be interesting to compare the evolution of chip speed versus the evolution of aircraft speed. I decided to use Intel processors, generally referred to as x86 processors, as a simple example of the speed-up of Moore’s Law. I found a table of all the Intel chips on Wikipedia at http://en.wikipedia.org/wiki/List_of_Intel_microprocessors, and a list of airspeed records at http://en.wikipedia.org/wiki/Flight_airspeed_record

Interestingly, the first airplane, the Wright Flyer, made a whopping 6.82 MPH, barely faster than a walking pace.

I graphed out the aircraft speeds and processor speeds to see how similar they were:

Aircraft speed records

X86 clock speeds


The airspeed curve looks fairly linear early on and then swings sharply up late. The x86 curve seems flat at the beginning with a sharp upward swing toward the end. In both cases, it flattens out in the last few samples; in the case of airspeed, there has not been a new record since 1976. A better view is to plot the speed on a logarithmic scale. That lets us see the low end better, and a straight line indicates an exponential (i.e., doubling every x years) relationship.

Logarithmic aircraft speed records

Logarithmic X86 clock speed


On the log scales, a straight line would fit the x86 curve well, except that the last few points have flattened out. The aircraft speed record flattens out in the 1930s, then jumps again when jets came into being. It’s not a great linear fit on the log scale.

If you look at the doubling time for x86, between 1971, when the 4004 came out at 710 kHz, and 2010, when 3.6 GHz Clarkdale chips came out, the clock speed grew by a factor of roughly 5,000, or a bit more than 12 doublings, over 39 years. That works out to doubling every 3.2 years or so. Aircraft didn’t do so well: they went from 6.82 MPH to 469.22 MPH for the best piston-engined result in 1939, a 68.8-fold increase over a similar 36 years. So, aircraft doubled their speed about 6 times in 36 years, or roughly every 6 years. There was a short relative jump when jets came out, but the curve flattened out by the early 60s.
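
For anyone who wants to play with the numbers, here is the doubling-time arithmetic as a small function, using the same data points cited above.

```python
# Doubling-time arithmetic from the paragraph above. Data points are the ones
# cited in the text: the Intel 4004 at 710 kHz in 1971 and a 3.6 GHz part in
# 2010; the Wright Flyer at 6.82 MPH in 1903 and the 469.22 MPH piston record of 1939.
import math

def doubling_time(year1, value1, year2, value2):
    """Years per doubling, assuming exponential growth between the two points."""
    doublings = math.log2(value2 / value1)
    return (year2 - year1) / doublings

print(f"x86 clock speed: {doubling_time(1971, 710e3, 2010, 3.6e9):.1f} years per doubling")
print(f"Piston aircraft: {doubling_time(1903, 6.82, 1939, 469.22):.1f} years per doubling")
```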

Interestingly, if you look at the data, you can see both curves having some flat spots when they ran up against technical problems that needed to be solved. In the case of aircraft, the first aircraft were biplanes or even triplanes. The planes needed two or more wings in order to generate enough lift to get in the air, but multiple wings create a lot of drag and limit the maximum speed. In the 1920s, single-wing planes became practical and speeds jumped. Then piston engines became the bottleneck, and speeds didn’t change much until the jet era. In reality, no propeller-driven plane could get close to the speed of sound, because the tips of the propellers reach the speed of sound well before the aircraft does. As the propeller tips approach the speed of sound, they create turbulence, and that turbulence robs the propeller of its ability to push the plane through the air. Even slower planes like trainers, especially the AT-6 Texan, can get their propeller tips near the sound barrier; pilots jokingly call this “a great way to turn av-gas into noise.” Jets solved that problem, but then friction with the air and drag became the limiting factors. The fastest jet airplane ever was the SR-71 Blackbird, whose skin was made of titanium, which is light but also heat-resistant. The fuel was stored just under the skin to dissipate the heat from the friction. Even in the thin air at 70,000 feet, friction heated the skin of the plane tremendously as it flew at around Mach 3 and 2,000 MPH.

Similarly, in semiconductors, it seems that x86 had a relatively flat spot in the late 90s, when major speed gains weren’t made until 2000. Now they are facing the limits of physics. The gate dimension is approaching atomic scale, which drives up the electrical resistance, especially at high clock speeds. Higher resistance causes greater heat generation in the chip, and that is reaching the maximum ability to remove the heat from the chip. You can even see peak clock speeds decreasing slightly from their peak.

While aircraft technology was relatively new, from 1903 until the early 30s, aircraft speed went up very quickly, and given the technical challenges of engine technology, materials science, and the understanding of aerodynamics, it is pretty amazing how quickly they improved. Semiconductors have seen even faster leaps in performance thanks to Moore’s Law, and they have had to overcome their own technical challenges as dimensions have shrunk. Now, as semiconductor technology starts to approach the limits of physics, it will be interesting to see how electronic systems evolve with smaller gains in raw speed.

A Day at the Races: Right Vehicles For the Job

My last two posts have been about Fit For Purpose and the right tool for the job. My wife and I spent this past weekend at the Mid-Ohio race course attending the Honda 200 IndyCar and Pirelli World Challenge races. While there, I saw a large number of vehicles with various and sundry purposes and characteristics. Of course, there were the race cars, and there were lots of spectator vehicles in the parking lots. There were also a large number of utility vehicles with all kinds of purposes. My wife thought I was nuts taking pictures of some of these vehicles. I had a hard time answering her query as to why I was taking a picture of an RV, for instance. Imagine if I had taken a picture of the Porta-potty service truck! 🙂

Which of these vehicles would you want for carrying a large load? Which would you use to camp out at the racetrack? Which one might make you some money on race day? Which would turn heads in the infield?

Race-ready Dodge Viper

2010-ish Ford GT40

Law Enforcement mobile command center

Tractor Trailer rig

Vintage Mini

Recreational Vehicle

Honda Sport Bike

Holmatro Safety truck

Coffee truck

Bikes and a Honda Civic

It is pretty clear from most of these what job each is supposed to perform. A lot of that is functional. For example, the Holmatro Safety truck is equipped with a load of safety equipment and race car towing gear. But at the same time, there are a number of non-functional requirements, such as vehicle weight, towing capacity, and carrying capacity, that figure into how well each will work.

 

Fit For Purpose, Part 2 — What are the characteristics of the right tool for the job?

In my prior post, I talked about choosing the right tool for the job. Nowadays, computer systems have become more generalized. The specialized systems of 20 or 30 years ago have been swallowed up by general purpose systems. Remember Wang word processing systems?

Wang OIS terminal

Or IBM’s 5520 Administrative System

IBM 5520 Administrative System

(http://www-03.ibm.com/ibm/history/exhibits/pc/pc_5.html)

Or a graphics workstation:

IBM Power 275 Workstation — CAD/CAM workstation

A lot of the functions of those specialized systems can now be done on a PC. Enterprise systems have become fairly general, becoming “servers,” whether they are based on mainframe, Unix/RISC, x86, or even Atom technologies.

So, since IT systems are general purpose, how does one figure out what makes one system better than another? What are the characteristics to look for?

Well, the answer probably isn’t really a technology question, per se. Sure, there are specific elements of a given technology that play a part, but usually it is a specific enterprise’s situation that determines the answer. IBM views these Fit For Purpose characteristics as falling into three broad categories: non-functional requirements, local factors, and workload fit. These characteristics are not functional; they are the result of the state of a given environment. The solution that best matches these characteristics in a given solution space and environment would be the best choice. Maybe I should say, the better choice; a best choice may not really exist.

The workload fit has significant technology specific elements to it. The characteristics of the workload might have stronger affinities to one platform or another. For example, a workload that has a high I/O content would, in general, perform better on a system with larger I/O capability. A workload with affinity to data on another system might perform better if it was co-located with that other data.

Non-functional requirements and local factors are less technology specific and more about the operational characteristics or the enterprise’s specific, unique requirements. Some non-functional requirements would include security, availability, performance, capacity, recoverability, etc. Local factors might include things like standards, strategic direction, the skills of the organization, etc.
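
As a sketch of how these categories can feed a decision, here is a toy weighted-scoring comparison. The platforms, criteria, weights, and ratings are all invented; in a real fit for purpose exercise, the weights would come from the enterprise’s own non-functional requirements and local factors.

```python
# Toy fit-for-purpose scoring: weight each criterion by how much it matters to
# this enterprise, rate each platform against it, and compare weighted sums.
# Every number below is hypothetical.

weights = {                      # how much each factor matters to this enterprise
    "availability": 5,
    "security": 4,
    "io_throughput": 4,          # workload fit: an I/O-heavy workload
    "in_house_skills": 3,        # local factor
    "acquisition_cost": 3,
}

ratings = {                      # 1 (poor) to 5 (excellent), per platform
    "platform_a": {"availability": 5, "security": 5, "io_throughput": 5,
                   "in_house_skills": 2, "acquisition_cost": 2},
    "platform_b": {"availability": 3, "security": 3, "io_throughput": 3,
                   "in_house_skills": 5, "acquisition_cost": 5},
}

def score(platform):
    """Weighted sum of ratings: higher means a better fit for this situation."""
    return sum(weights[c] * ratings[platform][c] for c in weights)

for p in ratings:
    print(f"{p}: {score(p)}")
```

Change the weights, and the “better choice” can flip, which is really the whole point of the framework.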

Another obvious element is whether or not one’s software product selection provides any choice. An open source solution can run anywhere. Some software vendor solutions run on only one platform. If the vendor doesn’t support another platform, then there is no choice to be made.

Fit For Purpose, Part 1 — What’s Fit For Purpose?

IBM has been talking about Fit For Purpose for the last few years. But what is Fit For Purpose? What does it mean?

My grandpa and my dad always used to tell me: use the right tool for the job. In a nutshell, that is what Fit For Purpose is. What happens if you use a tool that is sub-optimal for the job? As an example, a hammer is a very useful tool. Everyone knows the saying that if you have a hammer, everything looks like a nail. There are lots of things you could use a hammer for, but your run-of-the-mill carpenter’s hammer is best suited for driving nails and pulling them out. You could use it on a demolition project around the house, but it would be a lot easier to use a sledgehammer and a crowbar. You could use it for gardening: you could hammer weeds and use the claw as a cultivator. Hammering as a method of weed control is probably not well proven, though, and a hammer would get heavy pretty quickly as a weeding tool. Sometimes we use a tool that isn’t fit for purpose just because it is more convenient. My wife tends to run to the convenience side, and it drives me nuts! She’ll grab any old thing that is close at hand for a job. Maybe a screw is loose and she’s in the kitchen; she’ll grab a butter knife to try to tighten the screw. That kind of works, but not if you need to get it fairly tight or you don’t have a lot of space. Plus, you are likely to damage the knife so that it becomes less useful for its normal purpose.

Here are a couple humorous examples, courtesy of the USDA work safety web page at http://www.ars.usda.gov/Services/docs.htm?docid=14582&page=1

Tool safety illustration: screwdriver

What are the consequences of choosing a less suitable tool? Well, in the case of screwdrivers and hammers, there is the possibility of damaging the tool or the work. There is also a chance of injury. Or it might only be inefficient and a waste of time. In Information Technology, choosing a less than optimal tool could increase the cost, increase complexity, or increase the chance of failure or bugs.

IBM uses Fit For Purpose in terms of IT systems and storage, but it applies almost anywhere. We naturally use a sense of fit for purpose in almost any decision we make. Often, we do not make conscious or rigorous fit for purpose decisions. And even when we do try to evaluate the decision, we frequently lack a good framework for the decision-making process.

In future articles in this series, I will discuss in more detail considerations for Fit For Purpose for IT Systems and a rigorous decision-making process to help make decisions better and more thorough.