Business

Learn, Observe, Control: UChicago To Commercialize A Generalized Software Optimization Framework

On its face, Ryerson Physical Science Laboratory seems like an unlikely place to find a new technology with the potential to change the world. Sitting on the main quad of the University of Chicago, the building’s hallways and classrooms could pass for the set of a film staged in the 1950s. There is a wood-paneled student lounge, and the whole building smells a bit like an old box of pencils. It’s part of a dwindling breed of academic buildings.

Ryerson is currently home to UChicago’s computer science department, which, at least when I was an undergraduate there, had a reputation for being a bit stodgy and overly theoretical. Whatever was true just a few years ago is changing very quickly today. The department has made a few important hires, including Intel’s former VP of research Andrew Chien as a professor and director of the CERES Center For Unstoppable Computing, and Michael Franklin, formerly the chair of Berkeley’s top-rated computer science department, as UChicago’s department chair. These hires have set the university’s computer science department on a path toward being, in Franklin’s own words, “world class” in both theory and applied science.

In 2013, UChicago’s computer science department made another key hire: Henry “Hank” Hoffmann. Michael Franklin told Crunchbase News that Hoffmann was brought in as the department built out its computing systems research group.

Among other earlier projects in his career, Hoffmann worked at MIT to help modernize billion-dollar radar systems designed to detect and intercept hypersonic ballistic warheads. He also built a programming interface for one of the first commercially available multicore processors at Tilera, a startup spun out of MIT. But his most important work to date is SEEC, the basis of his Ph.D. dissertation.

Hoffmann’s framework for so-called “Self-Aware Computing Systems,” known as SEEC, was deemed a “World Changing Idea” by Scientific American in 2011, and now, in 2018, that idea may change the face of the tech business.

“Hoffmann’s work is one of those all-too-rare examples of something that pushes a field forward, academically speaking, that also has significant commercial promise,” said Michael Franklin, whose storied lab at Berkeley created the popular Apache Spark framework.

“I feel like I could never call it [‘the Hoffmann Optimization Framework’] in an academic setting because I think people would just laugh at me. […] As long as I’m not specifically quoted as using my own name to refer to the framework, we’re good,” Hank said in an interview with me. To be clear, Hoffmann doesn’t seem like the type of person to brand something with his own name. That name was attached to the framework by his commercial partners, mostly in the form of its initialism: HOF.

Why Hoffmann’s Optimization Framework Matters

Machine learning is eating the world. It is so prevalent in Silicon Valley tech startup pitches today that it’s basically become an inside joke. Seemingly everything needs to be controlled by a neural network, the deeper the better. But machine learning techniques are being used in all kinds of applications that are less flashy than some whiz-bang tech startup.

Neural nets are optimizing everything from cancer diagnosis to pneumatic braking on freight trains, and everything in between. Thanks to some fancy linear algebra, a computer is able to develop and evolve statistical models of how systems operate and then perform actions based on those models. But that learning process takes computational resources, lots of high-quality data, and, most critically, time.

Re-training a system in the face of new information – a shock to the system, or an encounter with something completely different from what the neural net was trained to understand – is not always feasible, and certainly not in situations where speed matters.

“The Hoffmann Optimization Framework is the world’s only AI ‘insurance policy’ that can optimize and supervise any legacy or new system to guarantee performance to your goals, fully responsive to what is both known and unknown,” said Lester Teichner, a partner at the Chicago Group, which has partnered with the University of Chicago to bring Hoffmann’s work to market.

So, if neural nets are good at developing a statistical understanding of a system, why not turn that gaze inward and learn about the system itself and how it works best, such that when something changes, it can respond?

Making Machine-Taught Systems “Think” On The Fly

The Hoffmann framework is able to extract additional performance from complex systems that have already been optimized using machine learning, narrowing the gap between the absolute best-case scenario and what you can actually achieve in practice.

It does so dynamically and in real time. HOF ingests data produced by a given system and makes on-the-fly adjustments to how that system operates in order to maintain the best level of performance, even in adverse conditions. The framework can deliver performance guarantees that are formally proven with cold, hard math.

And, perhaps most importantly, the framework is abstract enough to be applied to basically any complex system to make it perform better. And it does so rather unobtrusively, by simply sitting atop an existing system.

The framework can be implemented by software engineers with relative ease through a software development kit (SDK) developed by Hoffmann and his research group. Hoffmann said, “One of the things I’ve wanted to do is remove as many parameters as possible from users, so that users could benefit from the combination of control systems and machine learning without having to learn how the control mechanisms are working at the operating system and hardware level. At that point you’re just swapping one problem for another.”
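
To make that concrete, here is a minimal sketch of the kind of observe-decide-act loop described above: a controller that watches one performance metric and nudges one resource “knob” to hold a target. This is not Hoffmann’s code or the SDK’s real interface; the class name, the gains, and the choice of a proportional-integral controller are all illustrative assumptions. In the spirit of Hoffmann’s quote, the caller states only a target, and the control machinery stays hidden.

```python
# A minimal, illustrative sketch of an observe-decide-act loop. This is
# not Hoffmann's code or the real SDK; the names, gains, and PI-control
# choice are assumptions made for illustration.

class GoalDrivenTuner:
    """Holds one performance metric at a target by adjusting one knob."""

    _KP, _KI = 0.5, 0.1           # internal controller gains, never exposed

    def __init__(self, target):
        self.target = target      # e.g. 20 frames per second
        self._integral = 0.0      # accumulated error, for steady-state accuracy

    def step(self, measured, knob):
        """Observe one measurement; return the next knob setting."""
        error = self.target - measured               # decide: distance from goal
        self._integral += error
        nudge = self._KP * error + self._KI * self._integral
        return max(0.0, knob + nudge)                # act: e.g. allotted cores
```

Each trip through the loop, the supervised system reports a measurement, the tuner returns a new knob setting, and the system applies it. Everything control-theoretic hides behind the single “target” parameter, which is the kind of simplification Hoffmann describes.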

Self-Aware Computing In Practice

I’ve been privy to a few results from tests of the framework, so here’s what that looks like in practice. This is where the “general” in “generalized optimization framework” really comes through, because it’s been implemented in many different kinds of application areas with generally impressive results.

Here are a few examples of its proven results today:

  • An interesting implementation in autonomous vehicles, described later in this article.
  • Hoffmann’s framework was recently implemented on Argonne National Laboratory’s Cray XC40 supercomputer running a popular (at least in academic circles) molecular dynamics simulator called LAMMPS. The optimized system produced “average increases in analyses completed of about 30% compared to the current state of the art,” according to Hoffmann. (Let that one sink in a moment.)
  • Implemented in a generative adversarial neural network learning a set of training data, the framework helped to produce results that were materially more accurate than the control, according to the tester.
  • In laboratory conditions, on phones with multicore ARM architectures, researchers using HOF achieved “2x” energy savings over the control sample by changing how the phone’s operating system allots resources during computationally-intensive tasks.
  • In a paper presented this year at ASPLOS, one of the most prestigious academic computer systems conferences, an implementation of the framework eliminated out-of-memory failures on test installations of Cassandra, HBase, HDFS, and Hadoop MapReduce. The framework produced configuration settings that delivered better performance than even expertly-tuned settings, and it accomplished all of this by changing as few as eight lines of code.
  • A chipmaker said that the framework’s results are equivalent to a generational upgrade, delivering next year’s performance expectations today.

“We have dramatically improved performance and lowered energy consumption in every instance we have implemented so far, seeing improvements of 20% or more across the board,” said Lester Teichner. “The Framework is widely applicable and has been deployed on mobile and server CPU and GPU chipsets, and on autonomous vehicle platforms.”

Ghost In The Machine

For a DARPA-backed trial, and in collaboration with Adam Duracz from Rice University and Jason Miller at MIT, Hoffmann’s framework was installed on a computer encoding video from a camera mounted to the top of a car. The encoder was set to output 20 frames per second at a specific resolution, and these targets were displayed on charts below the video for me to watch. To test how well the framework delivers on its performance guarantees in adverse conditions, the demonstrators began killing off CPU cores and slowing the fan speed on the computer to simulate the kind of degraded performance one might encounter “in the wild.”

The lines on the charts began to deviate from their targets, but very quickly – like, within a second or two – returned to basically normal performance as the framework re-allocated its remaining computing resources on the fly. At first the framework overshot the target, then undershot it, but it quickly converged on the preset performance target: 20 frames per second. It wasn’t until the computer was really crippled that the video began to pixelate and skip frames, but despite the juddering and the boxiness, the video encoder kept working. Without the framework’s supervisory and control functions, the system simply crashed at the first sign of trouble.
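
For intuition, here’s a toy re-creation of that demo in Python: a loop that learns a simple throughput model from each observation and re-sets a quality knob to hold 20 frames per second while cores disappear. The throughput model and every number are invented; this only illustrates the idea of learning a system model online and steering a knob against it.

```python
# A toy re-creation of the demo above: hold an encoder at 20 fps while CPU
# cores are killed mid-run. The throughput model and all numbers here are
# invented for illustration.

TARGET_FPS = 20.0

def encoder_fps(cores, quality):
    """Toy model: throughput rises with core count, falls with quality."""
    return 12.0 * cores / quality

quality = 4.0                            # the 'virtual knob' being turned
for step in range(8):
    cores = 8 if step < 4 else 2         # simulate killing six of eight cores
    fps = encoder_fps(cores, quality)    # observe the current frame rate
    gain = fps * quality / cores         # learn: estimate the system's gain
    quality = gain * cores / TARGET_FPS  # control: solve the model for 20 fps
    print(f"step {step}: cores={cores} fps={fps:5.1f} -> quality={quality:.2f}")
```

In this idealized model the controller recovers in a single step after the cores vanish; the real framework briefly overshot and undershot because it was steering a far messier system with an imperfect model.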

In that demonstration of the framework being applied to video encoding, I was able to see the ghost in the machine – the mechanistic “self” in self-aware computing – turning the virtual dials to keep things running. It was both fantastically interesting and very spooky to watch.

Hoffmann affectionately described the successful test as one of his “proudest moments” as an academic.

If you’ll forgive the automotive pun, imagine a few years down the road what this capacity to self-heal will mean for autonomous vehicles. If a patch of sensors in the LIDAR device on top of the car malfunctions, or a tire pops while on the road, Hoffmann’s framework could detect the failure and reallocate system resources to keep the vehicle operating at its best, even in adverse conditions. And the fact that this adjustment happens so quickly matters, particularly for autonomous vehicles, which may be moving very fast.

Rather than asking a machine-taught system to re-learn everything when something unexpected happens, HOF takes what the system already “knows” about its limits, its theoretically optimal behavior, and the variables (the “virtual knobs”) that affect its performance to find the best way to operate given the context of its environment.

Our hypothetical car mentioned earlier wouldn’t need to re-learn how to drive with three functional tires, or how to “see” with partial blindness. A drone that’s far from its base could operate at a different speed or trajectory to ensure that it has sufficient battery life to make it home. It wouldn’t need to re-learn how to fly, this time with half a battery. Multi-core computers, or servers in a data center, could reduce energy expenditure by more efficiently allocating computational resources or adjusting fan speeds under heavy load. In a complex system with many layers of subsystems, having HOFs all the way down the stack would make the whole system run more efficiently.
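
That “find the best way to operate” step can be as simple as a constrained choice over operating points the system has already characterized. Below is a minimal sketch of the drone example, with invented numbers: among the speed settings that still get the drone home on its remaining battery, pick the one that spends the least energy.

```python
# A minimal sketch of choosing among known 'knob' settings under a
# constraint. The (speed, power) table and all numbers are invented; a
# real system would measure or learn these characteristics.

# Flight configurations the drone already "knows": (speed m/s, power watts)
CONFIGS = [(5.0, 60.0), (8.0, 90.0), (12.0, 180.0), (15.0, 290.0)]

def config_for_home(distance_m, battery_wh):
    """Least-energy configuration that still reaches home on the battery."""
    feasible = []
    for speed, power in CONFIGS:
        hours = distance_m / speed / 3600.0   # time to cover the distance
        energy = power * hours                # watt-hours the trip costs
        if energy <= battery_wh:
            feasible.append((energy, speed))
    return min(feasible, default=None)        # None means nothing gets us home

print(config_for_home(distance_m=10_000, battery_wh=35.0))
# prints (31.25, 8.0): fly home at 8 m/s, spending about 31 watt-hours
```

No re-learning happens here: the drone’s existing knowledge of its own operating points does the work, and the “decision” is just a feasibility check plus a minimum.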

A Generalized Approach To Commercializing A Generalized Optimization Framework

UChicago, in partnership with the Chicago Group, has begun the process of taking the framework to the broader market.

Hank said he was “lucky” to have previous startup experience working on Tilera, which showed him the value of focusing on one specific application of a particular technology. But for the optimization framework, he said “I wasn’t even sure that starting a company was the right idea, because I thought that this technology was useful in a lot of different contexts. The idea of spinning up a company and having to pick just one of those contexts didn’t seem right. This technology could be in phones, or in data centers, or in a bunch of different areas. What if we picked the wrong one?”

So UChicago and the Chicago Group are licensing the framework broadly across industries rather than spinning up a company around a single use case. That broad licensing approach is itself a generalized, abstracted way to commercialize a generalized optimization framework. “We eat our own dog food,” quipped Hank, who concluded, “We believe in generalization.”

Crunchbase News has learned that the Chicago Group is in discussion with several major internet companies, automakers, semiconductor producers, and electronics manufacturers in the US and abroad. They have also approached a number of venture capital firms to make the optimization framework available to their portfolio companies.

After seeing what this seemingly simple framework is capable of, we’re also left wondering what other technologies are hiding in the messy corners of professors’ offices and research labs both in dusty old Ryerson and at other institutions. In the struggle between research institutions and private industry to create the technologies of the future, academia scored a point today.

Update: We corrected the spelling of a surname following original publication. 

Illustration Credit: Li Anne Dias
