In 1999 Universal Audio, the company founded by the esteemed Bill Putnam Sr., was re-launched by his sons, Bill Jr. and Jim Putnam [Tape Op#24]. In addition to re-creating some of the classic audio equipment that UA was known for, the company is also focused on "powered plug-ins" — developing their own DSP (digital signal processing) card to run UA's VST (Virtual Studio Technology) plug-ins in a DAW without taxing the computer's processor. Their first card, the UAD-1, was a success and has now been superseded by the UAD-2, with up to ten times the power of the original. Engineer and Chief Scientist Dr. Dave Berners has been involved in this end of UA's work since it started, and has had a hand in designing many of their most popular plug-ins, including emulations of the 1176, LA-2A, Pultec EQP-1A and Pultec Pro, Neve 33609, Neve 1073, Helios Type 69, Fairchild 670, Roland RE-201 and the Precision Series.
Tell us a little about yourself and your journey in music and design.
Ever since I can remember, I thought I would probably become a design engineer — but I've always been interested in music as well. Growing up we had a piano at home, and I was usually glued to the bench. During college I began focusing on electric bass — mostly rock and some jazz. I've played in many original, as well as cover, bands over the last twenty years, and these days I'm playing bass for Kevin Cadogan of Third Eye Blind, as well as in a few cover bands. I became interested in recording at Stanford University when I took a series of classes from Jay Kadis. It was an exciting idea for me when I realized that I could work as a design engineer in the field of music. I think having a musical mindset makes a big difference in my approach to the design of recording equipment.
How did you get into DSP and working with Universal Audio?
I did my undergraduate studies [SB] in electrical engineering at MIT, got an MSEE at Caltech and did a PhD at Stanford, where I am a consulting professor. I first started doing serious DSP at Stanford's Center for Computer Research in Music and Acoustics [CCRMA], under Julius O. Smith III, and started working at UA in late 1999. I first talked with Bill Putnam Jr. about UA in 1998 or 1999, when we were graduate students together at Stanford. Bill asked me if I thought I would be able to use circuit analysis to create accurate digital/discrete-time emulations of analog audio equipment — in particular, vintage equipment. At first I helped out with specifications on parts for our (hardware) LA-2A re-creation, while working on DSP ports for some of the Kind of Loud plug-ins (Woofie and Tweetie). Then I began creating algorithms for plug-ins together with Jonathan Abel, who was our CTO [Chief Technology Officer]. Jonathan and I collaborated, or used each other as sounding boards, for virtually every early UA software algorithm. Additionally, I have worked on a handful of UA's hardware projects, the most recent being the 1176AE.
Why did Universal Audio choose to develop "powered plug-ins" with the use of hardware-based DSP?
One thing that's nice for us is that we can tell our customers how many instances of each plug-in they can expect to get when running on our cards. With native processing, instance counts depend on the type and speed of processor being used. For a company the size of UA, it's also nice being able to spend all of our resources optimizing our code for one processor. If we wanted to optimize our algorithms to run in native mode on different systems, we would have to allocate more resources to optimization. We can create more plug-ins with the same number of engineers if they only have to be made to run efficiently on one platform.
The UAD-2 has now been on the market for a few months. How does it differ from its UAD-1 predecessor?
We have chosen to use the Analog Devices SHARC [digital signal processor] this time. One big advantage right off the bat is that development continues on the SHARC, so we can expect increasingly faster DSPs to be developed in the future that will be able to run our plug-ins. This gives us a nice path for future products. The added DSP resources will definitely give us new flexibility for creating algorithms, with the Quad model having on average ten times the raw power of the original UAD-1.
Take me through the basic stages of "modeling" hardware.
We start with the schematics, knowing that there will be some parasitic effects that do not appear in drawings. By taking data from the unit we want to model, we can find out what nonlinearities, self-resonances, etc. need to be included to match the behavior. From this data we create an extended model, based on the schematic, but including whatever significant parasitic behavior has been measured. Finally, we calibrate the component values in the model in order to match the measured behavior. In some cases we create algorithms to automatically optimize all component values to match our measured data as closely as possible. In terms of what to measure, that is determined first by the schematic; then, after discovering nonlinearities and parasitic effects, we define a second set of measurements that will expose the behavior we are trying to identify.
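As a rough illustration of that final calibration step — not UA's actual tooling, just a minimal sketch — here is how component values of a simple circuit model might be fit to measured response data. The circuit (a one-pole RC low-pass), the component values, and the "measured" data are all hypothetical stand-ins:

```python
import numpy as np
from scipy.optimize import minimize

def rc_lowpass_mag(f, r_kohm, c_nf):
    """Magnitude response of a one-pole RC low-pass at frequencies f (Hz)."""
    tau = (r_kohm * 1e3) * (c_nf * 1e-9)  # time constant in seconds
    return 1.0 / np.sqrt(1.0 + (2 * np.pi * f * tau) ** 2)

# Stand-in "measurement": response of a reference unit with R = 10 kohm,
# C = 3.3 nF. In practice this data would come from the hardware itself.
f = np.logspace(1, 4.3, 200)  # 10 Hz .. ~20 kHz
measured_db = 20 * np.log10(rc_lowpass_mag(f, 10.0, 3.3))

def cost(params):
    # Sum of squared dB errors between the model and the measured response.
    model_db = 20 * np.log10(rc_lowpass_mag(f, *params))
    return np.sum((model_db - measured_db) ** 2)

# Start from deliberately wrong component values and let the optimizer
# calibrate them to match the measured data.
result = minimize(cost, x0=[5.0, 1.0], method="Nelder-Mead")
r_fit, c_fit = result.x
```

One thing even this toy example shows: the measurement only pins down the product R·C (the time constant), not R and C individually — which is one reason fitting component values against data, rather than trusting the schematic's nominal values, matters.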
What are some of the hardest pieces of gear to emulate or model? Are reverbs easier than compressors or equalizers?
In terms of algorithm development difficulty, there are two steps in the process. The first is modeling the system to be emulated, and the second is implementing the model. For difficulty of implementation, here is the ultimate: circuits with 1) high bandwidth, 2) feedback and 3) nonlinearities. When all three elements are present in the same place, that is difficult. Any two of those elements can be treated with various techniques, but all three at once pose implementation problems that are hard to solve. Usually systems that are difficult to model are nonlinear in some way. Many times difficulties arise in systems that are not purely electrical, such as luminescent panels, photoresistors, acoustic transducers, physical systems, etc. Nonlinearities are expensive to model in DSP, because they typically require oversampling to prevent aliasing. But some nonlinearities can be very nice; they add harmonic content to the signals being recorded. Our RE-201 plug-in has this type of nonlinearity built in, and it will regenerate in a different way each time the plug-in is used or opened, even if the controls are set the same. So each instance will behave differently from the others.
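To see why nonlinearities call for oversampling, consider a quick sketch (illustrative only — the saturation curve, rates, and signal here are arbitrary choices, not anything from UA's plug-ins). A tanh saturator generates harmonics; applied at the base sample rate, harmonics above Nyquist fold back as inharmonic aliases, while running the same nonlinearity at a higher rate and band-limiting back down keeps them out:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48_000
n = 4096
t = np.arange(n) / fs

# A high-frequency tone: tanh distortion creates harmonics above Nyquist.
# The 3rd harmonic of 15 kHz lands at 45 kHz and aliases down to 3 kHz.
x = 0.9 * np.sin(2 * np.pi * 15_000 * t)

# Naive: nonlinearity applied at the base rate -- the 45 kHz harmonic
# folds back to an inharmonic 3 kHz component.
y_naive = np.tanh(3.0 * x)

# Oversampled: upsample 4x, saturate at 192 kHz (where 45 kHz is still
# below Nyquist), then band-limit and decimate back to 48 kHz.
y_os = resample_poly(np.tanh(3.0 * resample_poly(x, 4, 1)), 1, 4)
```

Comparing spectra of `y_naive` and `y_os` shows the 3 kHz alias largely absent from the oversampled version — at the cost of running the nonlinearity (and two resampling filters) at four times the rate, which is exactly the expense being described.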
I assume modeling the T4B photoresistor and its luminescent panel in the LA-2A was difficult.
Exactly! Let's compare three different compressors: the LA-2A, 1176LN and Neve 33609. I would say that the LA-2A was the toughest of the three, in terms of producing a physical model of what's happening. The physics of the electro-luminescent panel and the photoresistor were difficult, and involved nonlinear systems which contained hidden states. Several different aspects of the compression were related to each other through these hidden states. So, development of an accurate model was an involved process. However, once an accurate model was created, implementation of the model was relatively straightforward. For the 1176LN, the (fast) time constants involved made it difficult to implement the system as a discrete-time (sampled) process. It was, in comparison to the LA-2A, a simpler model, but the signal processing required for the implementation was much more elaborate. For the 33609, the system model was not overly complicated by comparison, and the implementation required only one novel signal processing technique in order to work. But collecting the data necessary to characterize some of the nonlinearities present in the system required the development of some sophisticated measurement techniques. A hardware unit had to be modified in order to expose some of the behavior we wanted to characterize. Different parts of the process can be difficult, depending on the product.
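The time-constant problem he mentions for the 1176LN is easy to appreciate: its fastest attack is on the order of tens of microseconds, comparable to a single sample period at 48 kHz (about 20.8 µs), which is what makes a naive sampled detector struggle. As a generic sketch of the kind of discrete-time level detector involved — this is a textbook one-pole attack/release follower, not UA's 1176 algorithm — the time constants map to per-sample smoothing coefficients like so:

```python
import numpy as np

def envelope_follower(x, fs, attack_ms, release_ms):
    """One-pole peak detector with separate attack and release time constants.

    Each time constant becomes an exponential smoothing coefficient; as the
    attack time approaches one sample period (1/fs), the attack coefficient
    collapses toward zero and the sampled detector degenerates -- a hint of
    why very fast analog time constants are awkward in discrete time.
    """
    a_att = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    a_rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    env = np.zeros_like(x, dtype=float)
    state = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = a_att if v > state else a_rel
        state = coeff * state + (1.0 - coeff) * v
        env[i] = state
    return env
```

Feeding this a burst followed by silence shows the fast rise and slow decay one expects from a compressor side-chain, with both behaviors set entirely by the two coefficients.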
So how do you go about selecting the "golden" unit that you choose to use as your reference model?
Sometimes it is difficult, even with the help of experts, to pick a single representative piece of gear. In some cases, we include a few choices for the plug-in to emulate particular examples of popular units. For example, our Plate 140 reverb plug-in includes models from the three EMT plates at The Plant Recording Studios in Sausalito, California. They sound significantly different from each other, but all have been well maintained.
The sonic accuracy of an emulation versus the sheer computational power required to model it is a factor you need to consider when developing plug-ins. How do you strike that balance?
When we were just getting started, we knew that we would have to provide a certain instance count for a plug-in to be successful. We could not disregard computational cost when developing the algorithms. On the other hand, we wanted to provide adequate quality — we did not want to be releasing "improved" emulations every few years. So, we tried to strike a balance. In terms of modeling hardware gear, there is always a "next level" of detail, albeit usually one that provides diminishing returns. If we had infinite computational resources, there are many, many additional elements that could be put into each algorithm. Usually, during development, we try a few different levels of detail. If some element makes a significant perceptual difference, we try to squeeze it in. Many plug-ins are released with an "SE" version, which uses fewer resources. We create such a version, if we can eliminate a significant chunk of computation with only a modest perceptual difference.
Has this model changed over the years? I've noticed processing power has been less of a concern with some of the newer plug-ins.
As time has progressed, we have started releasing some algorithms that would have been considered too computationally expensive when the UAD-1 was first released. Partly we have been able to do this because our customers have asked for faithful emulations, even at the expense of instance counts. But I think it is the right way to go no matter what, because we would like to keep the algorithms ahead of the technology curve. We know that instance counts will go up as we gain access to more DSP power. Compromising on quality would be shortsighted: who cares whether we'll be able to run 1,000 or 2,000 instances of the LA-2A plug-in in the year 2025? No one. But hopefully people will still care what it sounds like. We'd like to err on the side of fidelity.
How does the UAD-2 change this paradigm for you and your future development?
With the UAD-2, the extra DSP available will allow us to continue pushing the envelope in terms of algorithm complexity. We're already thinking about the next "universe" of algorithm development with the huge amounts of computational power that will some day be available. Factors of two or four in computation can change the details of an algorithm, but factors of 100 or 1,000 can change the entire basis on which algorithms are developed.
Now that you have the added processing power, do you plan to go back and look at the models and rework them for the new cards?
It's an idea that's been in the back of our minds. Most every algorithm we ever developed could be improved in some way, given enough DSP to work with. Sometimes improvements come in chunks — a certain quantum of resources must be gained before any significant improvement in quality can be realized. We have to pick and choose when would be a good time to develop an "improved" algorithm. We don't want to create confusion by having too many variants of an algorithm. If we continually fiddle with our existing plug-ins, it may lead people to wonder, "What was wrong with the old one?" With the UAD-2 Quad being ten times the UAD-1, in terms of DSP, it's easy to argue that several of the existing plug-ins could be revised without requiring unrealistic levels of DSP for a Quad system. And, we do have well-developed ideas about what would be the next step in improving the accuracy of most emulations. But we need to decide how to allocate resources between revising existing algorithms and producing new ones.