The issues are enormous, the implications profound. What we’re being told is incredibly exciting. But some really important questions seem to have been left unanswered.

What certainly appears to be a massive breakthrough in computational power has just been announced by a team of computer scientists from Carnegie Mellon University. You’d be hard pressed to find an aspect of our lives that this announcement won’t affect.

Examples of what this development affects include everything from transportation, energy, telecoms and manufacturing to computer graphics, imaging, scheduling and logistics. It even extends to suggesting your next Netflix movie.

There are no obvious credibility problems with this story. The previous holder of the speed record for solving the problem in question is among those lavishing praise upon the achievement in no uncertain terms: “There is no other algorithm that runs at even close to this speed. In fact, it’s impossible to design an algorithm that will be too much faster,” says Professor Daniel Spielman of Yale University (a former student of Professor Gary Miller, the new record holder).

But as far as I can tell, some important questions do not seem to have been addressed, either in the announcement or in the paper it announces. This is a new algorithm. Where is the computer program that demonstrates it and lets us compare its performance with what it supersedes?

How can such an important development in computer science be announced without a demo that we can see? Sure, there might be a perfectly good justification for this omission, but it is curiously absent from what has been released so far. If there is a good reason why there isn’t a demo, we’d like to know what it is.

Another question concerns the speed-up itself. That speed-up factor of a billion times is in comparison with a 2,000-year-old method of doing the relevant calculations. Spielman’s speed record from 2006 is, I’m sure, much faster than the ancient method, but it is not at all clear (except in obscure terms unintelligible to the non-mathematician) how much faster Gary Miller’s Carnegie Mellon approach will be than Spielman’s.
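To make the baseline point concrete, here is a rough, purely illustrative sketch of my own (not from the researchers): it times the classical dense approach (Gaussian elimination, which is what numpy.linalg.solve uses under the hood) against an off-the-shelf sparse iterative solver on a small SDD system. Neither of these is the new Carnegie Mellon algorithm, and the matrix size and figures are arbitrary; the point is only that the choice of baseline dominates any quoted speed-up factor.

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000  # tiny by the press release's standards, but enough to show the gap

# A simple SDD matrix: the Laplacian of a path graph plus a small diagonal
# shift so the system is strictly positive definite.
main = 2.0 * np.ones(n) + 0.01
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.random.rand(n)

t0 = time.perf_counter()
x_dense = np.linalg.solve(A.toarray(), b)  # dense Gaussian elimination
t_dense = time.perf_counter() - t0

t0 = time.perf_counter()
x_cg, info = spla.cg(A, b, atol=1e-10)     # sparse conjugate gradient
t_cg = time.perf_counter() - t0

print(f"dense Gaussian elimination: {t_dense:.4f}s")
print(f"sparse conjugate gradient:  {t_cg:.4f}s (converged: {info == 0})")
```

Even on this toy problem the iterative solver comfortably beats the “ancient” method; the unanswered question is how much further the new algorithm goes beyond solvers of that modern kind.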

It is as if (and yes, my comparison is pretty clumsy and over the top here, perhaps you can do better) a new car with a record-breaking top speed were being announced, but when we asked what that speed was going to be, we were told it would be remarkably faster than a horse. Having said that, this announcement unquestionably looks mind-bogglingly exciting: “it may soon be possible for a desktop workstation to solve systems with a billion variables in just a few seconds”. I just don’t know how much faster that new capability will be compared with what I can do at the moment, using what we have now.

Something tells me it will be much, much faster. Perhaps just not quite a billion times faster.

Don’t get me wrong, the press release is actually pretty exemplary, presenting the story in a refreshingly crisp, clear and objective way. It’s just that this is such an awesome-sounding announcement that I needed to know even more. It’s probably just me. Maybe I’m expecting too much.

Here’s the announcement. It’s on the Carnegie Mellon University website. I’ve only included the beginning here. You can read the rest of it on their page, here.

Carnegie Mellon Researchers Break Speed Barrier In Solving Important Class of Linear Systems

Computer scientists at Carnegie Mellon University have devised an innovative and elegantly concise algorithm that can efficiently solve systems of linear equations that are critical to such important computer applications as image processing, logistics and scheduling problems, and recommendation systems. The theoretical breakthrough by Professor Gary Miller, Systems Scientist Ioannis Koutis and Ph.D. student Richard Peng, all of Carnegie Mellon’s Computer Science Department, has enormous practical potential.

Linear systems are widely used to model real-world systems, such as transportation, energy, telecommunications and manufacturing, that often may include millions, if not billions, of equations and variables. Solving these linear systems can be time consuming on even the fastest computers and is an enduring computational problem that mathematicians have sweated over for 2,000 years.

The Carnegie Mellon team’s new algorithm employs powerful new tools from graph theory, randomized algorithms and linear algebra that make stunning increases in speed possible. The algorithm, which applies to an important class of problems known as symmetric diagonally dominant (SDD) systems, is so efficient that it may soon be possible for a desktop workstation to solve systems with a billion variables in just a few seconds. The work will be presented at the annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), Oct. 23-26 in Las Vegas.
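For readers wondering what “symmetric diagonally dominant” actually means, here is a minimal sketch of my own (not from the paper or the press release): it builds the Laplacian of a tiny graph, checks the SDD property, and solves the system with plain conjugate gradient from SciPy, a standard solver that has nothing to do with the new algorithm.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Edges of a small undirected graph: a 5-node cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

# Graph Laplacian L: node degree on the diagonal, -1 for every edge.
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1
    L[j, j] += 1
    L[i, j] -= 1
    L[j, i] -= 1

# SDD means: symmetric, and every diagonal entry is at least the sum of the
# absolute values of the other entries in its row.
assert np.allclose(L, L.T)
assert all(L[i, i] >= sum(abs(L[i, j]) for j in range(n) if j != i)
           for i in range(n))

# Solve L x = b with ordinary conjugate gradient. The right-hand side must sum
# to zero because a graph Laplacian is singular on the all-ones vector.
b = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
x, info = spla.cg(sp.csr_matrix(L), b, atol=1e-10)
print("solution:", x, "| converged" if info == 0 else "| not converged")
```

This is the shape of problem (at toy scale) that the announcement says could soon be solved with a billion variables on a desktop workstation; the sketch shows the kind of system involved, not the researchers’ method.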