Wayne Peet is an accomplished pianist/keyboardist and a first call player in the jazz/new music/avant-garde scene in Los Angeles, but he's also known as an excellent recordist, mixer, mastering engineer, and studio owner. The list of artists he's recorded spans genres and styles, including: Vinny Golia, Bobby Bradford, Nels Cline, Kenny Burrell, Bennie Maupin, Thurston Moore, Steuart Liebig, Mike Watt, Alex Cline, G.E. Stinson, Gregg Bendian, Lydia Lunch, Carla Bozulich, Scarlet Rivera, James Gadson, Louie Bellson, Joseph "Zigaboo" Modeliste, Jubilant Sykes, Robert Edward Thies, Pat Metheny, and Hubert Laws. Nestled in Mar Vista, between Venice and Culver City in Southern California, his Newzone Studio is conveniently located on the property behind Wayne's home.
I was born in Dallas, Texas. I was only a baby when we moved outta there, but it gives me the Texas cachet if I need it with people. I grew up in Oregon, Washington, Idaho, and California, but my parents were both from the Southern California area so I kind of gravitated around that and always wanted to end up back in Southern California. I came back to California when I was in high school out in the Imperial Valley, and then I went to college in Santa Barbara at Westmont College. John Rapson [trombone, composer] also went there. He was like one year ahead of me. I had gone to that college partly because I saw a band that he fronted there, when I went up for college day, that kind of interested me. He was forming [a band] and I got in it. We both played trombone and piano. It ended up that he played trombone in this group and I was playing piano. It was a loose jazz collective kind of thing. It was called the Frobisher Hall Art Ensemble. We spanned the range from '60s, and earlier, Miles Davis stuff to really outside, more free kind of stuff influenced by Anthony Braxton [saxophone, composer], the Art Ensemble of Chicago, and those kinds of groups. So, it was a pretty interesting time. Then in school I was studying classical [music]. I was concentrating mostly on piano performance and composition. After that I ended up in L.A.
I've learned a lot of stuff about how to EQ and do different things to digital to make it "analog pleasant." A lot of that has to do with cuts between 2 [kHz] and 3 [kHz] and some of the harsh areas. Once you cut some of that stuff it's just more pleasant to listen to. You're just trying to end up with something that sounds good. Of course, analog and digital are very different, but if you took the exact recording of the same music that was recorded [in] analog and digital, gave it to say a [skilled] engineer, he'd mix to make it sound like he wants it to sound. If the analog is less harsh and the digital is more harsh, he's going to cut down the harshness of the digital in the mix, and he may boost the high end in the analog if he feels it's too dull. So, what comes out the end from those two things won't be identical, but it's going to be a lot closer than the original [analog and digital versions the engineer was given]. So, when people talk about just the technical aspects of an original recording, it's all valuable to know and talk about, but it doesn't mean it's going to end up sounding like that. When you mix, you're doing things to make it sound good and you're going to do different things if it's recorded differently. If it's a more roomy recording and you're having a problem with it being so roomy, you're going to be EQ'ing to try to control that roomy nature and get some presence. If you've got a recording that's super dry you're going to add some ambience. So, the beginning state of all these things is not the place to base your argument because you're going to feed them through your "mixing tube" and it comes out the other end not the same as the original.
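The kind of cut Peet describes is what a peaking EQ band does. A minimal sketch, assuming a hypothetical -4 dB cut at 2.5 kHz with a Q of 1 (the actual frequency, depth, and width would depend entirely on the material); the filter is the standard "Audio EQ Cookbook" peaking biquad:

```python
import math

# Robert Bristow-Johnson "Audio EQ Cookbook" peaking filter coefficients.
# The -4 dB at 2.5 kHz, Q = 1 settings here are purely illustrative.
def peaking_eq(fs, f0, gain_db, q):
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b = [1 + alpha * A, -2 * cosw, 1 - alpha * A]
    a = [1 + alpha / A, -2 * cosw, 1 - alpha / A]
    return [v / a[0] for v in b], [v / a[0] for v in a]

def biquad(x, b, a):
    # direct form I, one channel
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1, y2, y1 = x1, s, y1, out
        y.append(out)
    return y

fs, f0 = 48000, 2500
b, a = peaking_eq(fs, f0, gain_db=-4.0, q=1.0)
tone = [math.sin(2 * math.pi * f0 * n / fs) for n in range(fs)]  # 1 s test tone
peak = max(abs(s) for s in biquad(tone, b, a)[fs // 2:])  # steady state only
print(round(20 * math.log10(peak), 1))  # a tone at the cut center loses ~4 dB
```

A tone right at the center of the band comes out about 4 dB quieter, while material away from the cut passes nearly untouched, which is why a narrow cut in the harsh region can tame digital edge without dulling the whole recording.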
We heard some mid '60s Miles [Davis] on CD. It was one of these remastered deals and I'm listening to it and something's just really wrong. It was Miles Smiles. I wore that record out on vinyl. I know every corner of that record. I'm real familiar with it and something was really bothering me. Then I figured out what it was. Too much bass! Whoever remastered it didn't understand the music. Probably somebody that was younger that didn't grow up around that stuff and was just going, "I can't hear the bass. I can fix that," and he brought the bass out. So now you can hear Ron Carter [acoustic bass] all through it and it ruined the groove. What was happening was Tony Williams [drums] is really on top of the beat all the time and Ron Carter is behind. He's playing on the backside of the beat. When the bass is kind of buried in the mix, it totally works! That's what I'm used to hearing and I'm sure that's what it sounded like in the room. All of a sudden you make it into this modern thing [and] it lags the whole time because Ron Carter is playing on the backside of the beat. I've done remastering and it's always a compromise because we have technology now, but it might be [used] wrong. On the other hand, I did a bunch of Art Pepper [remastering]. Again, I kind of got the gig because I know music. We have a lot of these board tapes and the tape runs out. It's just a cassette that was going, but he's dead now so that becomes a valuable document. Luckily we've got outtakes from the studio [too] that weren't released. I've done a lot [to the] funky board tapes to get them releasable. People are forgiving about that stuff because they're valuable artifacts. They're not going to go, "This sounds like a cassette!" I'm valuable to [the project] because I understand the form. Say the tape runs out and they flip the tape, I can cut that together because I know where the choruses start and then make it one continuous thing.
If you don't really know music you wouldn't know how to do that. At the same time, you try to be respectful of the music the way it was, but you're trying to help it, too. There's no sin in making it more listenable.
My other little rant is about headphones. Once you get into isolation the headphone thing is really critical. Obviously, if you're in the room [with the other musicians] and the headphones suck, you can crack one ear and you can hear what's going on. But if the bass is in a booth and the drums are in a booth, you're really depending upon your headphone mix. One of the things I did initially was try to get individual headphone mixes for every person and I've been expanding on that. I don't think [separate] headphone mixers for each musician are a good idea. It can be okay but, A) it's more expensive, and B) it's out of my control. As an engineer you realize that if somebody says, "I want more bass," they may not want more bass. They may want less of everything else and then they'll hear the bass better. If you know what you're doing you can interpret what people ask for and give them what they really want. I've talked to a number of engineers that talk about those individual mixers and they say if you go out at the end of the session you'll see everything is turned up! It's the same with EQ and anything else. It only goes positive — it never goes negative. If somebody is listening to a mix, they'll always ask you, "I want more X." Sometimes that means that you turn up X. Sometimes that means you've got to turn something else down. The same with EQ. Negative EQ is the best thing rather than boosting and I've learned that's true over the years.
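The "more bass really means less of everything else" point is just level arithmetic. A quick sketch with hypothetical cue-mix fader values shows the two moves give an identical relative balance, but ducking the rest leaves far more headroom than boosting the one source:

```python
import math

def db(ratio):
    return 20 * math.log10(ratio)

# Hypothetical linear cue-mix levels: one bass feed against everything else.
bass, rest = 0.3, 0.7

# Option A: take "more bass" literally and boost it 6 dB.
boost = bass * 10 ** (6 / 20)
# Option B: same relative change -- pull everything else down 6 dB instead.
duck = rest * 10 ** (-6 / 20)

print(round(db(boost / rest), 1), round(db(bass / duck), 1))  # same balance either way
print(round(boost + rest, 2), round(bass + duck, 2))  # option B uses far less level
```

This is why turning things down, like cutting rather than boosting with EQ, gets the same perceived result without the cue mix (or the channel) creeping toward clipping over the session.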
Horns & Mics
I generally come straight from the bell, but not real close to it if possible. If somebody is alone in a room you can mic it from across the room, you can mic it right [up close], you can do whatever you want. If there are three horn players standing right next to each other, you've got to get a little closer if you're trying to get isolation on those tracks. I almost never use generalized mic [placement]. Just as a sideline, all things being equal in terms of good gain staging, a decent mic, and mic pre, by far the biggest difference is not going to be switching out a mic or switching out a mic pre, but your mic placement. Nobody sits around and argues about mic placement, but they'll sit around and argue for days about mic preamps, mics, and what kind of cable you're using. All definitely make a difference, but in the scheme of things you've really got to go for what statistically really makes a difference. If you move [a mic] by an inch the sound completely changes. I mean really changes. Much more than even if you switched out a mic in the exact same position. So learning good mic placement, to me, is the most important thing because it makes the biggest effect in the signal chain. I've learned things about how to place mics for vocals to minimize plosives, closeness, and that kind of stuff. On a saxophone or any woodwind, the sound doesn't come out [just] the bell, it comes out all the holes and the bell. On a brass instrument like a trombone or a trumpet, all the sound is coming out the end of the bell. So generally speaking, on a saxophone you just want to be out in front of it. One of the things you want to do is listen. You go and put your head around in the various places and then you find the sweet spot. The same thing with acoustic guitar. I always do that. I have the guy play and then you come in and you just kind of move your head around and you can find that place and then put the mic there rather than guessing.
Kenny Burrell told me one of the things he liked about Rudy Van Gelder [Tape Op #43] was that he would really listen to the sound when he was setting up a session. You know, on jazz dates, they're not going to have patience like rock things, [spending] hours on a snare drum sound. [With] jazz guys it's like if they're 15 minutes into the studio they're wondering why they're not rolling. That's another aspect of my thing: I learned how to get stuff going quick, because especially with jazz musicians they don't really have the patience for that type of long setup. They just want to play.
Compression & Jazz
Certain jazz players know about compression, but they don't know what it is. They know it limits dynamics so they [think they] don't want that. "I want all my dynamics in there!" I've had directives from some people; "Don't use compression on me!" It's just because they heard compression is limiting dynamics and they're thinking, "I want my whole performance in there. I don't want to turn down my highs," and that's a reasonable conclusion. However, it's a recording. That's why you can listen to a slammin' metal band, turn it way down so you can barely hear it, and it still sounds loud. It sounds loud. It isn't loud. And that's what a recording is. A recording can sound intense even if you have the volume turned way down, and something quiet sounds quiet. It has nothing to do with the dB, it has to do with the timbre. So, the thing about compression — probably everybody that reads Tape Op already knows it — is that it's a way of actually enhancing perceived dynamics because you can bring up your softer stuff. Your loud stuff is going to sound loud anyway just because the timbre of it is loud. The other thing about these jazz guys is that possibly they worked with some rock 'n' roll engineer at some point who hyper-compressed their saxophone or voice or whatever and they didn't like that pumping sound. "Oh I don't want that anymore!" They don't realize that if you use it right it's pretty transparent and yet it will enhance your perceived dynamics. I've actually A/B'ed it — with compression and without. Almost one hundred percent of the time they pick the compressed [version] because it sounds bigger. It sounds better. Now if you're on the rock end of things you can do the hyper-pumped-compressed thing and nobody's going to care. It just becomes another part of the sound. If you're trying to maintain the classical or the jazz thing where it's more "natural" then you're going to want to use a transparent approach. But you're going to need to do it right.
A jazz record with properly transparent compression is going to sound bigger and you're going to be able to hear the subtleties of the acoustic bass. So that's my little rant. You need to use what you can to hear what you need to hear.
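What Peet describes can be seen in the static curve of a downward compressor. A minimal sketch, assuming an arbitrary -24 dB threshold, 3:1 ratio, and +6 dB makeup gain (a real compressor also smooths its detector with attack and release times, which is where audible pumping comes from):

```python
import math

def compress(samples, threshold_db=-24.0, ratio=3.0, makeup_db=6.0):
    # Static curve of a hard-knee downward compressor. The threshold,
    # ratio, and makeup values here are arbitrary illustration numbers.
    out = []
    for s in samples:
        level_db = 20 * math.log10(max(abs(s), 1e-9))
        over = max(level_db - threshold_db, 0.0)
        gain_db = over / ratio - over + makeup_db  # shrink the overage, then make up
        out.append(s * 10 ** (gain_db / 20))
    return out

# Hypothetical levels: quiet brush detail near -26 dBFS, a full hit near -2 dBFS.
quiet, loud = 0.05, 0.8
q_out, l_out = compress([quiet, loud])
# The quiet material gets the full +6 dB of makeup; the loud hit is pulled
# down first, so it ends up below where it started.
print(round(q_out / quiet, 2), round(l_out / loud, 2))
```

Turned down on playback, the loud hit still reads as loud because of its timbre; the compression just keeps the soft detail audible, which is the "bigger" sound listeners pick in the A/B test.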