It is the brave men and women fascinated by analog-to-digital (A/D) and digital-to-analog (D/A) conversion that make digital audio a reality today. Dan Lavry, founder of Seattle-based Lavry Engineering, put his music bug and his science bug together early on. Now he spends his days ensuring that sound will come in and out of digital devices as transparently as the latest components will allow.

When did you first become interested in conversion?
Until the mid-'80s I was designing converters for applications that weren't audio-related: medical electronics (early MRI machines), instrumentation and weighing scales for letters. Converters are simply the bridge from a world that is analog — such as blood pressure, the weight of a letter or sounds — to a world of technology that is built around computing. The reason conversion becomes so important is the shortcomings of analog. Before we got so computerized we had very low-quality memory, and even today we don't have very good analog memory. In the past, for example, we put information on tape, which gets demagnetized on a car dashboard during a hot day; or vinyl, which is great until you wear it out or it gets scratched. Analog memory is very fragile. Many of those analog memory shortcomings are solved in the solid world of numbers, which is where computers operate.
When did you first become interested in audio conversion?
When it comes to audio conversion, I had a lot of freedom in a couple of areas. One is that the absolute maximum peak level is not that critical. Usually you have some type of volume control anyway! You let the user change it, so the absolute maximum is not that critical. The DC offset is not all that critical either (relative to many other applications). But audio conversion tends to be extremely demanding because of the dynamic range. The ear hears a huge dynamic range. If you agree that a person can hear a 120 dB range — a 1,000,000:1 range when you use common numbers instead of dB — then you can understand that the ear is extremely demanding. It was a big fight to get more and more dynamic range because the early converters were weak in that department — it was a struggle to get 16 bits. The other challenge is to achieve ultra-low distortion. So I got interested in audio conversion because it was about music. I'm a musician and I play music every day. Here was an opportunity to combine what I like doing a lot, which is design, with another thing I like a lot, which is music.
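As a quick check on the numbers above: the decibel scale for amplitude uses 20 dB per decade, so 120 dB works out to exactly the 1,000,000:1 ratio Lavry cites. A minimal Python sketch of the arithmetic (the function names are mine, for illustration only):

```python
import math

def db_to_ratio(db):
    """Convert decibels to a linear amplitude ratio (20 dB per decade)."""
    return 10 ** (db / 20)

def ratio_to_db(ratio):
    """Convert a linear amplitude ratio to decibels."""
    return 20 * math.log10(ratio)

print(db_to_ratio(120))        # ~1,000,000:1, the range cited above
print(ratio_to_db(1_000_000))  # 120 dB
```

The same arithmetic explains why each added bit of real resolution is worth roughly 6 dB: doubling the amplitude range adds 20·log10(2) ≈ 6.02 dB.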
Without getting too scientific about it, explain briefly what A/D and D/A converters are actually doing in the signal path.
Outside of audio, we are living in a world that is basically analog. Some people will argue that everything is quantized, but those people are physicists looking at dimensions so small that it's not anything we would normally relate to on a day-to-day basis. At the scale of what we deal with — for all practical purposes — sound is analog. If we vibrate strings, play a piano, a drum or a flute, we have analog signals. We are making a motion and the motion is not broken into little steps; it is a continuous motion. The sound is air pressure variation, which we convert to voltage with a microphone. If the signal is weak, we increase it with a mic pre. If we choose to stay in the analog world then we embed it into vinyl or tape. We then pick it up and convert it by taking an electric signal and converting it back to an air motion with devices like speakers or headphones. But there are some advantages to embedding the audio signal in a digital format: You can store digital on a hard drive or on a CD — the signal becomes a sequence of samples that you can send over the Internet easily, and it won't deteriorate much. In the case of an analog signal: if you smear it a little bit you've distorted it, altered it. If you send analog over a transmission line it will be changed because the line will distort it. With digital you have immunity to all that stuff. All you have to make sure of is that a zero will be recognized as a zero, and a one as a one. Digital offers a very robust way of doing things. You can duplicate it, transport it, store it — a lot of things that analog doesn't let you do. So basically the A/D converts the (analog) audio to digital so you can store it, and do whatever you want in a very predictable and repetitive way. Then at some point with the D/A you have to convert the digital back to analog so you can hear it. Sound is not zeros and ones — it's varying air pressure.
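The A/D-then-D/A round trip described above can be sketched in a few lines: sample the continuous signal at regular intervals, quantize each sample to an integer code, then map the codes back to voltages. This is an illustrative toy — the sample rate, word length and function names are my own assumptions, and real converters also use analog anti-aliasing and reconstruction filtering that is omitted here:

```python
import math

SAMPLE_RATE = 48_000   # samples per second (assumed for illustration)
BITS = 16              # word length of the hypothetical converter

def adc(signal_at, seconds):
    """Sample a continuous signal and quantize each sample to a BITS-wide integer."""
    full_scale = 2 ** (BITS - 1) - 1
    n = int(seconds * SAMPLE_RATE)
    return [round(signal_at(i / SAMPLE_RATE) * full_scale) for i in range(n)]

def dac(samples):
    """Map integer codes back to a sequence of voltages in [-1.0, 1.0]."""
    full_scale = 2 ** (BITS - 1) - 1
    return [s / full_scale for s in samples]

# A 1 kHz sine at half of full scale, "recorded" for one millisecond
sine = lambda t: 0.5 * math.sin(2 * math.pi * 1000 * t)
codes = adc(sine, 0.001)       # the digital side: a sequence of integers
voltages = dac(codes)          # back to analog-style values, within half an LSB
```

The recovered voltages differ from the original sine by at most half a quantization step, which is exactly the error budget Lavry is talking about guarding.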
I always like to say that my philosophy is one of taking the same air pressure variation that the musical instrument put out, tracking it accurately through the conversion, amplifying it very correctly without making changes, then converting it back very carefully to minimize alteration of the waveform. Remember, the waveform is like an oscilloscope picture of the signal changing over time. That has to be guarded very carefully, because if you change it you change the sound. It's okay to do that if it's intentional, and there are tools for that — EQ, flange, reverb, compression — but not everything is there to change the sound. Some things are there not to change the sound, and if you let them change it then you lose control.
What made you decide to start building your own converters? What was your first unit?
Before I made converters for my own company I made them for other companies. My early converter design work took place when I was the chief engineer for Silicon General, a Silicon Valley company. We were an engineering contractor on a very large scale, and when a chance to do digital audio came in I proposed to take it. I made a 16-bit A/D there, and we sold a lot of them to Ampex and some to Otari. I also made a D/A, which eventually found its way to Mark Levinson at New England Digital (maker of the Synclavier). After that I joined Apogee. At the time they were making anti-aliasing filters, but they knew the time for that product line was up. They wanted something different, so I designed their first A/D and first D/A. In the meantime I was experimenting with Nyquist dither on designs that would become my next converters; that was marketed as UV dither. After that, things were not working out for me at Apogee, so I went out on my own and decided to aim at the high-end market.
Is that when you founded dB Technologies?
Exactly. In 1993 I had been off the scene for a couple of years, busy designing better dither based on a newer concept (noise shaping), a sample rate converter and more. I came out with our first product, the 3000 (now the LE3000S Digital Finalizer). It had a very, very good sample rate converter, digital algorithms, a lot of things — it was a Swiss Army knife for audio. I was also working on my super A/D, and at AES in 1995 I introduced the first AD122. It was a real 20-bit converter, and real 20 bits translates to 122 dB dynamic range, so I called it the AD122. I took it to Sony Music Studios, where they did a test recording of Yo-Yo Ma, and they were floored by the sound of those converters.
What did people think was interesting about your own particular approach?
I knew that I wanted to get into the audio business and I didn't have any interest in the low end. The place to go is for the highest quality and the highest performance. The people who want the highest performance — and there aren't that many of them out there — will find you. When I broke the 120 dB barrier in 1995, other manufacturers were way behind, and most of them are still not there today. The next year at AES people were selling 24-bit converters. My last 4 bits (out of 24 bits in the format) were noise — the competition had 7-8 bits of noise! One should make a distinction between the "number of bits" and the "number of real bits." Take a 12-cylinder car where only 8 cylinders are connected to the drive shaft. Is it a 12-cylinder car or an 8-cylinder car? You can say it has 8 "real cylinders" plus 4 useless cylinders. Similarly, you can have 24 bits, but how many of them are "connected" to the sound? There is no such thing as true 24-bit conversion and there won't be in my lifetime. I'm talking conversion bits here, which is different from processing bits. That's a big distinction. When you do processing you need more bits. Why? If each of 16 channels has a range from 0 to 1 million, then the sum of all the channels will range from 0 to 16 million, requiring 4 more bits. Say you want to attenuate the range of a channel by 2. The "steps" are no longer whole integers. The smaller steps (quantization levels) also call for more bits. So for processing we want more bits to express both bigger numbers and smaller numbers. But at the end we scale back the number of bits. The extra bits served their purpose during processing, but the engine can never yield more dynamic range or less distortion than what was fed to it by the converters.
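The "real bits" arithmetic above is easy to verify: an ideal N-bit converter has a theoretical dynamic range of about 6.02·N + 1.76 dB, and summing full-scale channels needs log2(channels) extra headroom bits. A small sketch (the formula is the standard ideal-quantizer result; the function names are mine):

```python
import math

def ideal_dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal N-bit converter: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def extra_bits_to_sum(channels):
    """Headroom bits needed so a sum of full-scale channels cannot overflow."""
    return math.ceil(math.log2(channels))

print(ideal_dynamic_range_db(20))  # ~122 dB -- the "real 20 bits" of the AD122
print(ideal_dynamic_range_db(24))  # ~146 dB -- far below any analog noise floor in practice
print(extra_bits_to_sum(16))       # 4 extra bits, as in the 16-channel sum above
```

Seen this way, a "24-bit" converter whose last several bits are noise delivers the dynamic range of a much shorter word — the format bits are there, but they aren't "connected" to the sound.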
How does Lavry Engineering build its units today?
It's all done in the Northwestern United States. Everybody that is associated with my company has to feel good about being alive. Having a positive work environment is very important to us.
How do you and your team decide what type of unit to develop next? What is the market calling for?
Our new AD10 is a little bit of a deviation for Lavry Engineering. It brings very high performance at a lower cost compared to units from other manufacturers. It has much more accuracy for a fraction of the price. I also added some colorization as a choice in the AD10, introducing our unique Digital Alias-Free Emulation modes, which allow users to select the input characteristic of Tube, Transformer, or both — which is called "Complex" mode. I did this because a lot of people in the industry are saying, "Sometimes I like colorization." My philosophy has been perfect transparency, but in the AD10 I figured out how to do distortion-free colorization, which is not so easy to do in digital. That's why we call it Digital Alias-Free Emulation. It is very deliberate.
Once installed in their studios, how can people ensure that they get maximum performance from their converters?
The first thing is that one has to overcome the unbelievable amount of hype in the industry and realize that — whenever possible — they should use the internal clock. Less jitter is better, and as a rule the internal clock yields less jitter, so the best thing is to use it whenever possible. There are times when one must use an external clock (such as the case of many channels). When doing so, the jitter goes up and the sound quality suffers a bit. That is a compromise we have to live with. Just do not fall for the false hype claiming that an external clock improves your sound. Another thing: you want to use most of the A/D range, but not hit 0 dB (full scale). If you work too far away from full scale you are wasting dBs (dynamic range) without gaining anything. I prefer that people don't hit 0 dB because that's clipping, and that makes the sound less than ideal. Many audio people are willing to trade clipping for the sake of making things louder. If they want to play that game they can set my AD10 to "Transformer," which makes excellent sound and gives distortion that won't sound bad. Also, when it comes to A/D keep as many bits as you can, and when you have to reduce your word length (for reasons such as a final format) use dither with noise shaping.
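That last point — dither before reducing word length — can be illustrated with plain TPDF (triangular probability density function) dither. This is a simplified sketch only: the noise-shaped dither Lavry describes additionally filters the quantization error away from the ear's most sensitive band, which is omitted here, and the function name is my own:

```python
import random

def reduce_to_16_bits(sample_24bit):
    """Truncate a signed 24-bit sample to 16 bits with plain TPDF dither.

    Noise shaping (not shown) would additionally push the residual error
    toward frequencies where the ear is less sensitive.
    """
    step = 1 << 8  # one 16-bit LSB expressed in 24-bit counts (2^8 = 256)
    # Triangular-PDF dither: sum of two uniform randoms, +/- 1 LSB peak
    dither = random.uniform(-step / 2, step / 2) + random.uniform(-step / 2, step / 2)
    # Quantize to the nearest 16-bit level; the dither decorrelates the error
    return int(round((sample_24bit + dither) / step))

# Dither preserves sub-LSB information on average: a constant input of half
# an LSB (128 out of 256) comes out as code 1 about half the time, so the
# long-run average recovers the half-step value instead of truncating it away.
avg = sum(reduce_to_16_bits(128) for _ in range(10_000)) / 10_000
print(avg)  # close to 0.5
```

Without the dither, a level below one 16-bit step would simply vanish (or turn into correlated distortion); with it, low-level detail survives as benign noise.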
What is going to keep converter technology and quality moving forward, and how are you applying that in what you do?
This is a question that has to be turned upside down. I'm a little concerned about the ability to keep converter technology quality up, because the technological world is not dictated by audio. It's dictated by the Internet, computing, consumer stuff and industrial stuff — all towards technologies that are not friendly to analog. The tech world has been moving to lower and lower voltages and smaller size, as well as higher integration (putting audio into digital computing ICs). Those trends go against analog quality. We all have to guard against losing the battle of good conversion and good analog. The challenge is to figure out a way to keep the technology good, despite the fact that less and less hardware is being oriented in that direction.
I take it you feel you're fighting the good fight?
I'm fighting a fight! Sometimes it gets easier, sometimes it gets more difficult.