Category Archives: Uncategorized

Moore’s Law of Aircraft Technology

I attended the Thunder Over Michigan airshow last weekend. What a great time seeing old warbirds, newer warbirds, and current Air Force inventory by way of the USAF Thunderbirds. Here are a few pictures:

P-38 Lightning, “Ruff Stuff”

P-51 Mustang, “Petie 2nd”

P-63 Kingcobra

PT-13 Stearman, primary trainer

F-86 Sabre

USAF Thunderbirds


Watching these time machines in action in Ypsilanti, MI got me thinking about the evolution of technology in general, and about how a mature technology like aircraft compares to semiconductor technology. I’m sure you’ve heard comments like: if car technology had advanced the way semiconductor technology has, a car would cost $x (where x is a relatively small number, like $200) and go y MPH (where y is a relatively large speed, like 600 MPH). So I was curious what a Moore’s Law curve might look like if you looked back at aircraft technology.

Moore’s Law states that the number of transistors that can be put on a chip doubles about every 2 years. Since the dimensions on the chip are shrinking, the corollary is that the speed of the chip goes up on a similar exponential curve. Using speed as a proxy for the improvement resulting from Moore’s Law, I thought it would be interesting to compare the evolution of chip speed versus the evolution of aircraft speed. I decided to use Intel processors, generally referred to as x86 processors, as a simple example of the speed-up from Moore’s Law. I found a table of all the Intel chips on Wikipedia at http://en.wikipedia.org/wiki/List_of_Intel_microprocessors, and a list of airspeed records at http://en.wikipedia.org/wiki/Flight_airspeed_record.

Interestingly, the first airplane, the Wright Flyer, made a whopping 6.82 MPH, barely faster than a jogging pace.

I graphed out the aircraft speeds and processor speeds to see how similar they were:

Aircraft speed records

X86 Clock Speeds


The airspeed curve looks fairly linear early on and then swings sharply upward late. The x86 curve seems flat at the beginning with a sharp upward curve towards the end. In both cases, it looks like things flatten out in the last few samples; in the case of airspeed, there has not been a new record since 1976. A better view is to plot the speed on a logarithmic scale. That lets us see the low end better, and a straight line on a log scale indicates an exponential (i.e., doubling every x years) relationship.

Logarithmic aircraft speed records

Logarithmic X86 clock speed


On the log scale, the x86 curve looks like a straight line would fit well, except that the last few points have flattened out. The aircraft speed record flattens out in the 1930s, then jumps again when jets came along; it’s not a great linear fit on the log scale.
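For anyone who wants to reproduce this kind of view, here is a minimal Python sketch. The year/speed pairs below are placeholders rather than the full record tables linked above; plotting on a log axis and fitting a line to log2(speed) shows how close the growth is to a constant doubling time.

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder (year, speed in MPH) samples -- substitute the record data
    # from the Wikipedia airspeed table linked above.
    years = np.array([1903, 1920, 1939, 1947, 1965, 1976], dtype=float)
    speeds = np.array([6.82, 170.0, 469.22, 670.0, 2070.0, 2193.0])

    # A straight line in log2(speed) vs. year means a constant doubling time.
    slope, intercept = np.polyfit(years, np.log2(speeds), 1)
    print(f"doubling time: {1.0 / slope:.1f} years")

    plt.semilogy(years, speeds, "o-")
    plt.xlabel("Year")
    plt.ylabel("Speed (MPH, log scale)")
    plt.show()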

If you look at the doubling time for x86 between 1971, when the 4004 came out at 710 kHz, and 2010, when the 3.6 GHz Clarkbridge came out, the clock speed grew roughly 5,000 times, or doubled about 12 times over 39 years. That works out to doubling every 3.3 years or so. Aircraft didn’t do as well: they went from 6.82 MPH to 469.22 MPH, the best piston-engined result, in 1939, a 68.8-fold increase over a similar span of 36 years. So aircraft doubled their speed 6 times in 36 years, or roughly every 6 years. There was a short relative jump when jets came out, but it flattened out by the early 60s.
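The arithmetic here is just a base-2 logarithm. A quick sketch, using the figures quoted above, for checking the doubling times:

    from math import log2

    def doubling_time(start, end, years):
        # Years per doubling, given start and end values over a span of years.
        return years / log2(end / start)

    # x86 clocks: 710 kHz in 1971 to 3.6 GHz in 2010 (39 years)
    print(doubling_time(710e3, 3.6e9, 39))    # roughly 3.2 years per doubling

    # Airspeed: 6.82 MPH in 1903 to 469.22 MPH in 1939 (36 years)
    print(doubling_time(6.82, 469.22, 36))    # roughly 5.9 years per doubling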

Interestingly, if you look at the data, you can see both curves having some flat spots when they ran up against technical problems that needed to be solved. In the case of aircraft, the first aircraft were biplanes or even triplanes. The planes needed 2 or more wings in order to get enough lift to get into the air, but two wings create a lot of drag and limit the maximum speed. In the 1920s, single-wing planes became practical and the speeds jumped. Then piston engines became the bottleneck, and speeds didn’t change much until the jet era. In reality, no propeller-driven plane would be able to get close to the speed of sound, because the tips of the propellers reach the speed of sound well before the aircraft does. As the propeller tips approach the speed of sound, they create turbulence, and that turbulence robs the propeller of its ability to push the plane through the air. Even slower planes like trainers, especially the AT-6 Texan, can get their propeller tips near the sound barrier; pilots jokingly call this a “great way to turn av-gas into noise.” Jets solved that problem, but then friction with the air and drag became the limiting factors. The fastest jet airplane ever was the SR-71 Blackbird, whose skin was made of titanium, which is light but also heat-resistant. The fuel was stored just under the skin to dissipate the heat from the friction. Even in the thin air at 70,000 feet, the friction caused tremendous heat in the skin of the plane as it flew at around Mach 3, over 2,000 MPH.

Similarly, in semiconductors, it seems that x86 had a relatively flat spot in the late 90s, when major speed gains weren’t made until 2000. Now they are facing the limits of physics. The gate dimension is approaching the scale of single atoms. This drives up the electrical resistance, especially at high clock speeds, and higher resistance causes greater heat generation in the chip. That, in turn, is running up against the maximum ability to remove heat from the chip. You can even see peak clock speeds decreasing slightly from their high point.

While aircraft technology was relatively new, from 1903 until the early 30s, aircraft speed went up very quickly, and given the technical challenges of engine technology, materials science, and the understanding of aerodynamics at the time, it is pretty amazing how quickly they improved. Semiconductors have seen even faster leaps in performance thanks to Moore’s Law, and have had to overcome their own technical challenges as the dimensions have shrunk. Now, as semiconductor technology starts to approach the limits of physics, it will be interesting to see how electronic systems evolve with smaller increases in raw speed.

A Day at the Races: Right Vehicles For the Job

My last two posts have been about Fit For Purpose and the Right Tool For the Job. My wife and I spent this past weekend at the Mid-Ohio race course attending the Honda 200 Indy Car and Pirelli World Challenge races. While there, I saw a large number of vehicles with various and sundry purposes and characteristics. Of course, there were the race cars, and there were lots of spectator vehicles in the parking lots. There were also a large number of utility vehicles with all kinds of purposes. My wife thought I was nuts taking pictures of some of these vehicles. I had a hard time answering her query as to why I was taking a picture of an RV, for instance. Imagine if I had taken a picture of the Porta-potty service truck! 🙂

Which of these vehicles would you want for carrying a large load? Which would you use to camp out at the racetrack? Which one might make you some money on race day? Which would turn heads in the infield?

Race-ready Dodge Viper

2010-ish Ford GT40

Law Enforcement mobile command center

Tractor Trailer rig

Vintage Mini

Recreational Vehicle

Honda Sport Bike

Holmatro Safety truck

Coffee truck

Bikes and a Honda Civic

It is pretty clear from most of these what job each is supposed to perform. A lot of the reason in these cases is functional; for example, the Holmatro Safety Truck is equipped with a bunch of safety gear and race car towing equipment. But at the same time, there are a number of non-functional requirements, such as vehicle weight, towing capacity, carrying capacity, etc., that figure into how well it will work.


Fit For Purpose, Part 2 — What are the characteristics of the right tool for the job?

In my prior post, I talked about choosing the right tool for the job. Nowadays, computer systems have become more generalized. The specialized systems of 20 or 30 years ago have been swallowed up by general-purpose systems. Remember Wang word processing systems?

Wang OIS terminal

Or IBM’s 5520 Administrative System:

IBM 5520 Administrative System

(http://www-03.ibm.com/ibm/history/exhibits/pc/pc_5.html)

Or a graphics workstation:

IBM Power 275 Workstation, CAD/CAM workstation

A lot of the functions of those specialized systems can now be done on a PC. Enterprise systems have become fairly general as well, becoming “servers,” whether they are based on mainframe, Unix/RISC, x86, or even Atom technologies.

So, since IT systems are general purpose, how does one figure out what makes one system better than another? What are the characteristics to look for?

Well, the answer probably isn’t really a technology question, per se. Sure, there are specific elements of a given technology that play a part, but usually it is the specific enterprise’s situation that determines it. IBM views these Fit For Purpose characteristics as falling into three broad categories: non-functional requirements, local factors, and workload fit. These characteristics are not functional; they are the result of the state of a given environment. The solution that best matches these characteristics in a given solution space and environment would be the best choice. Maybe I should say, the better choice; a best choice may not really exist.

The workload fit has significant technology-specific elements to it. The characteristics of the workload might have stronger affinities to one platform or another. For example, a workload with a high I/O content would, in general, perform better on a system with greater I/O capability. A workload with affinity to data on another system might perform better if it were co-located with that data.

Non-functional requirements and local factors are less technology-specific and more about the operational characteristics or the enterprise’s own unique requirements. Some non-functional requirements would include security, availability, performance, capacity, recoverability, etc. Local factors might include things like standards, strategic direction, the skills of the organization, etc.
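To make this concrete, here is a small, purely illustrative sketch of how an enterprise might weigh candidate platforms against its own non-functional requirements and local factors. This is not an IBM tool or methodology; the platform names, weights, and scores are all hypothetical.

    # Hypothetical weights: how much this enterprise values each factor (they sum to 1.0)
    requirements = {
        "security": 0.30,
        "availability": 0.25,
        "performance": 0.20,
        "recoverability": 0.10,
        "local skills": 0.15,
    }

    # Hypothetical scores: how well each candidate platform meets each factor (0-10)
    platforms = {
        "mainframe": {"security": 9, "availability": 9, "performance": 7,
                      "recoverability": 9, "local skills": 4},
        "x86 farm":  {"security": 6, "availability": 7, "performance": 8,
                      "recoverability": 6, "local skills": 9},
    }

    # Weighted score per platform; the point is that the weights come from the
    # enterprise's environment, not from the technology itself.
    for name, scores in platforms.items():
        total = sum(weight * scores[factor] for factor, weight in requirements.items())
        print(f"{name}: {total:.2f}")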

Another obvious element is whether or not one’s software product selection provides any choice. An open source solution can run anywhere. Some software vendor solutions run on only one platform. If the vendor doesn’t support another platform, then there is no choice to be made.

Man or Machine? Science Fiction?

A co-worker at IBM forwarded this link to me: http://www.newyorker.com/online/blogs/currency/2014/04/cautiously-welcoming-our-new-computer-overlords.html. The article discusses a talk held by the MIT Initiative on the Digital Economy on the potential of computers replacing knowledge workers. The talk featured Erik Brynjolfsson and Andrew McAfee, authors of “The Second Machine Age.” Brynjolfsson and McAfee are MIT professors, so much more credible than I on this topic. They assert that machines are now becoming capable of taking over the work that highly educated people have been doing: perhaps the last refuge of human supremacy over machines. The article cites Ken Jennings’ Final Jeopardy answer while facing off against IBM’s Watson: “I for one welcome our new computer overlords.” The article also discusses several other examples of machine learning doing what very highly specialized humans do, including tax preparation, accounting, tissue sample evaluation, and language translation.


I also recently read an opinion page article along the same lines. The article essentially said that machines are now able to learn faster than people, so the old advice to retrain may no longer be useful. It will take us longer to retrain than it will take to train machines to do whatever we would retrain for. Perhaps humans will be better able to choose which skill to retrain to, but once they begin to fill a new knowledge-worker niche, the machines could be trained to fill that niche in less time than the humans took to learn the new skill. So the window for a human to recoup the investment in retraining is probably too short for it to pay off.


This becomes outsourcing on steroids.


So where does that leave us? What does it do to our economy? Will we reach a utopia where all humans can live off the largesse and surplus of the machines’ work? Or will we fall into a dystopia of the kind Karl Marx envisioned, where the capitalists take all the value of the machines’ work and drive the value of the workers’ labor down to zero? Where nearly all labor becomes low skill and low value? Where even being highly educated is no refuge?

Tyans and Googles and Power! Oh My!

IBM helped start the OpenPower Foundation last year, along with a variety of leaders in the open systems space, including Google, Nvidia, Tyan, Samsung, Ubuntu and others. Last week, at the Open Innovation Summit, some of the leading companies appeared together to discuss their innovations around the Power 8 processor.

Gilead Shainer, vice president of marketing at Mellanox Technologies; Sumit Gupta, general manager of the Accelerated Computing unit at Nvidia; Gordon MacKean, engineering director for the platforms group at Google and also chairman of the OpenPower Foundation; Chuck Bartlett, worldwide technical support director at Tyan; and Brad McCredie, vice president of Power Systems development and president of the OpenPower Foundation, show off a motherboard made by Tyan for potential use by white box system builders.

A more detailed report on the Open Innovation Summit is available at http://www.itjungle.com/tfh/tfh042814-story02.html

Then, this week, IBM is detailing Power 8 servers at its Impact event in Las Vegas. Major parts of the announcement include Power 8-based 1- and 2-socket, 2U and 4U rack-mount servers. IBM has designed these servers for today’s new workloads, such as analytics and big data, as well as cloud. To do that, the processor and servers have dramatic increases in memory and I/O bandwidth. With these technical advancements and its initiation of the OpenPower Foundation, IBM plans to take the processor fight to the streets.

Some of the key new features include:

  • Double the I/O bandwidth compared to Power 7 – this level of I/O is 4 times the bandwidth of similar x86 designs

  • 4X the threads of x86 processors

  • KVM for Power

Due to these technical achievements, IBM says it is achieving dramatic performance increases. One example is a tripling of performance on analytics workloads compared to industry-standard processor-based Hadoop clusters.

IBM is making presentations at its Impact conference today in Las Vegas.

You can watch IBM’s executives describe the announcement at: https://engage.vevent.com/index.jsp?eid=556&seid=68651&code=repflyer

Who am I?

Hello, and welcome to my blog. Let me first introduce myself. I am a graduate of The Ohio State University and a current employee of IBM. I am trained as an Electrical Engineer, so I am kinda geeky, but I have interests that go well beyond technology. I plan to use this blog to present ideas and connections that I see cropping up around technology, philosophy, religion, economics and the occasional sports situation. The posts on my page are purely my own and do not reflect the opinion of my employer or any other innocent bystanders. I hope you will enjoy my musings, strange and varied as they may be.

As a first point of interest and a random sports note, my favorite teams are, of course, The Ohio State University Buckeyes, the Minnesota Twins and the Columbus Crew. I have a love-hate relationship with the Minnesota Vikings, one of only two teams to go 0-4 in Super Bowls and the victim of the Herschel Walker trade. I also like the Green Bay Packers, having lived in the Milwaukee area during the glory days of the 60s. Of course, liking both Green Bay and Minnesota is its own quandary.