How did you begin working with Roland?
I've been a musician since grade school. Trumpet was my instrument. I took to it really quickly – music just became part of my DNA. Roland was always my favorite brand. In high school, I discovered this storage room that had a Roland SYSTEM-100 [synthesizer] and a TEAC A-3440 open reel recorder. Nobody was using it! Some of my first experiments were learning how to recreate Kraftwerk and Devo songs one track at a time, and then mic'ing a snare drum and whatnot. I just got hooked. A big part of it was Roland’s slogan at the time, "We Design the Future." For me, as a young musician reading lots of science fiction, that slogan really pulled me in. For a couple of years right after high school, I was playing in bands. I wanted to do the rock star thing – specifically, progressive rock. I ended up working in Vancouver's largest music store, and pretty soon was working with keyboards, synthesizers, and kind of the first stages of computers in music in '85 and '86.
Oh man, those must have been exciting times!
I mean, yeah. Super, super exciting times. Everything was just exploding, and there was a new story every week. I'd come into work and there was something new from someone. And, of course, MIDI was only a couple of years old at that point, so it was starting to propagate and have its influence. Computers were coming in; first for sequencing, then digital audio with the [Alesis] ADAT and TASCAM DA-88 and things like that. It just so happened that the founder of Roland Canada was from Vancouver, so fortunately I got a chance to join Roland in 1992 as a Product Specialist, which meant I went everywhere – from music stores to libraries and museums – doing clinics and seminars, teaching people how to use this technology that was new to pretty much everybody. I discovered that not only did I really enjoy doing that, but I had a bit of a gift for taking really complex topics and distilling them down into language that very right-brained creative people could wrap their heads around. Eventually, I moved into marketing roles. Flash forward to 2008, and I was asked to take over the presidency of the Canadian company by the person who created Roland, Mr. [Ikutaro] Kakehashi. This was something I never had my hand up for – I was this synth geek, composer, sound designer, and wannabe rock star. But while the people in this role before me had come from business or finance backgrounds, Mr. Kakehashi felt that my background in product and customer experience prepared me well.
How then did you make your way to the Roland Future Design Lab?
Eventually, I was asked to move to Los Angeles to help start the company's first global marketing organization. I did a similar thing, again, for what we eventually called "global customer experience," or GCX, which is really everything that happens after you buy something, from support to service and so on. Throughout that time, several of the people who had gone on to become the president of the worldwide company were my friends. Many of them were engineers. One of those people, Mr. [Masahiro] Minowa, who's now our CEO, was someone I reported to directly for a period of time. He and I had this kind of rolling conversation, which started with our mutual appreciation for that phrase Roland had introduced in the '80s, "We Design the Future." He always felt that if we could ever make this our decision, we needed to bring that slogan back. But make it more than a slogan – let's point to a real, solid reason as to why we're bringing it back. This conversation rolled nicely into a related topic. One of the long-standing problems that all tech companies face, Roland included, is that there are hundreds of engineers across hardware, software, platform, and core technologies working on all of our products, across all of our brands, around the world. There's a necessary focus on the near term – say, a three-to-five-year window – where people are proposing the ideas, building the roadmaps, and designing and building the products that drive revenue, that form the lifeblood of the company. But it's difficult to look out past that. We felt that perhaps what was needed was a form of incubation zone, a sort of think tank, where people can experiment and where they can fail fast and fall forward.
It sounds like you're describing Bell Labs or Xerox PARC.
That's exactly right. The biggest, most successful tech brands of today – Apple, Meta, Google, etc. – all have something like this in one way or another. Some brands make these efforts more forward-facing – for example, Sony CSL (Computer Science Laboratories). But some of them keep this stuff more backroom, below ground. I started to lean into this idea, and Mr. Minowa very much agreed and had his own ideas in this area. A few months later, he asked me to head what became the Roland Future Design Lab, which is now in its second year. Our objective isn't necessarily to build finished things that we commercialize. Rather, we scan the horizon, look for opportunities in emerging technologies – such as AI and machine learning, extended reality, and the Internet of Things (IoT) – and find a way to bring them into music creation. And one of the things that's really important for us is that, with anything we do, we try and get it into the public view as early as we dare – to build something to a functional but not necessarily finished form, get it into people's hands, and get feedback.
How does AI factor into what you're doing at RFDL?
I'd actually been thinking about artificial intelligence quite a bit, even prior to starting the RFDL. Back in 2019, I began contributing to something we called the ZENJIN Project, which led to ZEN-Core, a new audio synthesis platform that has since grown into an entire ecosystem powering everything from synthesizers and digital pianos to digital wind instruments, electronic drum kits, and more. The idea was to develop a platform where the basic unit of sound that we call a "tone" could have basic compatibility across everything within the ZEN-Core universe, whether it's hardware or software. Two of the first ZEN-Core instruments we were working on were the JUPITER-X and the JUPITER-XM. We introduced a feature on them called I-Arpeggio, which used AI to create a new real-time arpeggiation function – it offered much more than the usual repeating motifs. That really got me and others thinking about potential future applications of artificial intelligence. Fast-forward to November of 2022 and the release of ChatGPT, and I was like, "Holy cow. This thing can produce art?" I was amazed and excited but also sort of horrified – I perceived it as much an existential crisis as anything. It's like, if we – by "we" I mean "humanity" – don't think about the good and the bad of this and just allow it to ingest all of the music that has ever been created, where does that leave human creativity?
That reminds me of this clip of David Bowie talking about the internet in the late '90s, where he refers to it as an "alien life form" – something so big that we can't even begin to predict the impact it will have.
Exactly. AI is a co-intelligence that is distinctly not human. Experts describe artificial intelligence as a "general-purpose disruptive technology" – something on the level of the internet or the wheel, depending on how far you want to go back. So, some people from IT and our cloud business division and I began collaborating on a document, trying to think of some guidelines with regard to how we might utilize AI – what our values and our principles might be. And then in August of 2023, Universal Music Group made this very high-profile announcement of a collaboration with YouTube to build AI-powered tools that help YouTube creators make music. Sir Lucian Grainge, the CEO of UMG, was front and center, talking about three high-level guiding principles that were helping to shape their vision and strategic planning. During all of this, I started to think about, of all things, MIDI – how there was this moment when all these companies realized that it would be better for everybody if MIDI were open and free to use, and they worked together to make it happen. Because of that, here we are 40-plus years later and it's still absolutely ubiquitous – one could even say that it created an industry. So, I was looking at the UMG and YouTube collaboration, and the conversations we’d been having at Roland, and thought that maybe we could work together to approach AI the way the music tech companies of the '80s had approached MIDI. Long story short, we – UMG and Roland – published Principles For Music Creation with AI in March of the following year.
It sounds like Isaac Asimov's "Three Laws of Robotics" but for music.
Yeah, I see that! A few of us think about Asimov from time to time – as we all should. [laughs] Just as Asimov's rules are for protecting human life, we're attempting to protect human creativity. We believe that both the process and the output of human creativity have value that needs to be respected and protected. And these principles have informed our work at the RFDL. One of our first efforts at the RFDL was something that we call Tone Explorer, which uses neural processing to make sound recommendations for MIDI files or MIDI performances, based on its understanding of the musical characteristics of the MIDI data and the tonal characteristics of the massive ZEN-Core sound library. We've partnered with Qosmo and Neutone, early supporters of AI for music, on a number of projects, one of which was Tone Explorer. Their Morpho neural sampling engine was crucial to the development of the RFDL's next major effort, Project LYDIA. Project LYDIA is a "technology preview" – a proof of concept. It's this interesting, almost paradoxical joining of an extremely sophisticated neural engine and the most intuitive, unintimidating, friendly presentation: a stompbox!
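For the technically curious, the recommendation idea behind something like Tone Explorer can be imagined as embedding a MIDI performance into a small feature vector and ranking a library of tones by similarity. The sketch below is purely illustrative – the features, the similarity measure, and the notion of per-tone embedding vectors are assumptions, not Roland's actual pipeline.

```python
# Hypothetical illustration only - not Roland's actual Tone Explorer code.
# Embed a MIDI performance as a small feature vector, then rank a library
# of tone embeddings by cosine similarity to recommend sounds.
import numpy as np
import mido

def midi_features(path: str) -> np.ndarray:
    notes, velocities = [], []
    for msg in mido.MidiFile(path):
        if msg.type == "note_on" and msg.velocity > 0:
            notes.append(msg.note)
            velocities.append(msg.velocity)
    notes, velocities = np.array(notes), np.array(velocities)
    # Crude musical descriptors: register, range, note count, dynamics
    return np.array([notes.mean(), notes.ptp(),
                     float(len(notes)), velocities.mean()])

def recommend(midi_vec: np.ndarray, tone_embeddings: np.ndarray, top_k=5):
    # Cosine similarity between the performance vector and each tone's
    # (assumed) embedding vector; higher = better match
    sims = tone_embeddings @ midi_vec / (
        np.linalg.norm(tone_embeddings, axis=1) * np.linalg.norm(midi_vec))
    return np.argsort(sims)[::-1][:top_k]  # indices of best-matching tones
```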
Right – at first glance this could have been made by BOSS.
It takes a lot of visible inspiration from that. What sets it apart from the typical stompbox is that it's actually a housing for a Raspberry Pi [computer]. And that Raspberry Pi runs the aforementioned Morpho Engine, which essentially takes the tonal qualities of one sound and maps them onto another.
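Morpho itself is a neural engine, but the underlying idea – take the tonal character of one sound and impose it on another – can be sketched with plain DSP. The toy example below transfers the long-term spectral envelope of a target recording onto an input signal; it's a loose analogy under stated assumptions, not how Morpho actually works.

```python
# Toy spectral "color" transfer - a non-neural analogy to timbre mapping,
# not the Morpho Engine itself.
import numpy as np
from scipy.signal import stft, istft

def spectral_color_transfer(input_audio, target_audio, fs=48000, nperseg=2048):
    # Short-time spectra of both signals
    _, _, Z_in = stft(input_audio, fs=fs, nperseg=nperseg)
    _, _, Z_target = stft(target_audio, fs=fs, nperseg=nperseg)

    # Average magnitude per frequency bin = a crude "timbre print"
    env_in = np.abs(Z_in).mean(axis=1, keepdims=True) + 1e-8
    env_target = np.abs(Z_target).mean(axis=1, keepdims=True)

    # Keep the input's phase and dynamics; borrow the target's envelope
    Z_out = Z_in * (env_target / env_in)
    _, out = istft(Z_out, fs=fs, nperseg=nperseg)
    return out
```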
I saw Alfie [Bradic, Head of Audio at Neutone]'s presentation at NAMM, where he used LYDIA to turn a collection of metal vocalist samples into an effect that he ran his guitar through. I'd never heard anything like it.
Exactly. In the future, a user will optionally be able to supply their own audio to be used in the creation of a Project LYDIA model, whose qualities can then be applied in real time to the sound being fed in.
That sounds computationally intensive. How's the performance?
The current version allows for good real-time performance. In its current form, the audio circuitry is external to the box, so we connect a USB audio interface. But we expect to improve performance further by integrating the audio circuitry into the main design itself – eventually, all of that is going to be built in.
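To give a sense of the real-time constraint involved: a block-based audio loop on the Pi has only a few milliseconds per buffer to run its processing. This sketch, using the Python sounddevice library with a placeholder in place of the actual neural engine, is an assumption-laden illustration of that latency budget, not LYDIA's code.

```python
# Illustrative real-time audio loop - a placeholder stands in for the
# neural engine; this is not Project LYDIA's actual implementation.
import numpy as np
import sounddevice as sd

RATE = 48000
BLOCK = 256  # frames per block; ~5.3 ms at 48 kHz - the processing deadline

def process(block: np.ndarray) -> np.ndarray:
    # Placeholder for model inference; it must return within
    # BLOCK / RATE seconds or the stream will underrun
    return block

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)  # reports over/underruns
    outdata[:] = process(indata)

# A USB audio interface shows up as the default device once connected.
with sd.Stream(samplerate=RATE, blocksize=BLOCK, channels=1,
               dtype="float32", callback=callback):
    input("Processing audio... press Enter to stop\n")
```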
I like that LYDIA's design is built on the artist explicitly giving consent for the model to train on their own data, locally. So rather than these tech giants scraping all the music in the entire world and making music without musicians, this design flips the whole thing on its head.
That's right, and it’s fundamental to our approach because it keeps the human creator in control.
It puts AI at the instrument level, the creative-process level, rather than usurping the whole creative process and jumping to the end result.
And it could create a market as well. In the future, should we bring this to market, perhaps someone could purchase models that somebody else created, enabling the creator to be compensated for the audio that they own, that they've created, that they've curated to train a model, which others could then use to make music with. In fact, our partners at Neutone already have this platform in place for their Morpho plugin.
It would basically take the business model of sampled instruments or sample packs and apply it to AI models. I guess that's why it's called "neural sampling"?
That’s an interesting thought. But what's also interesting is that when we showed it off at the Audio Developers Conference – a place disproportionately full of not only very technology-aware people but AI-aware people – and at NAMM, the vast majority of people who tried it didn't care, and frankly weren't even interested in, the fact that it was AI. They just saw it as a cool-sounding guitar pedal.
Is Project LYDIA going to remain a Raspberry Pi at its core?
At this stage? Yeah, it absolutely is. In fact, one of the things I was really excited to talk to Tape Op about was the DIY aspect. Many would define DIY as physically building things. Or, if we're talking about software, building Max for Live devices or instruments in [Native Instruments] Reaktor. And Roland hasn't been active in that kind of space for quite some time. But back in the early '80s, we had a brand called AMDEK, which stood for Analog Music Digital Electronic Kit. We produced guitar pedals, a line mixer, a drum machine – all in kit form. AMDEK was eventually rebranded as Roland DG, which still exists today. But just as we looked back to the slogan "We Design the Future," this is another thing that we want to bring back and see if there's interest. Sometimes our best ideas for the future are rooted in things we’ve already learned and done!
Is a component of this potential DIY direction the hardware you've built around the Raspberry Pi, such as the 4-knob control unit? That's probably a GPIO [General Purpose Input/Output] hat, correct? You could potentially be making a whole ecosystem of Raspberry Pi hardware accessories for making music.
We were very intentional about using what is the de facto standard open source platform and then building off of it from there. You probably noticed that, in this first version, none of the connectivity is concealed. All of the I/O, including the SD card slot, is exposed. The intention is that this is absolutely an audio platform. And so, if you'd like to run a different application, if you'd like to create a new hat or replace the encoders that are there, go for it.
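As a hint of what that hackability looks like in practice, here's a hypothetical sketch of reading a hat's rotary encoder on the Pi with the gpiozero library and mapping it to a model parameter. The pin numbers and the "morph amount" parameter are invented for illustration; nothing here is taken from the actual control unit.

```python
# Hypothetical encoder-to-parameter mapping on a Raspberry Pi hat;
# pins and the parameter are illustrative, not from the real unit.
from signal import pause
from gpiozero import RotaryEncoder

morph_amount = 0.5  # imagined 0..1 model parameter

def on_turn():
    global morph_amount
    # encoder.steps counts detents in -20..20; scale into the 0..1 range
    morph_amount = max(0.0, min(1.0, 0.5 + encoder.steps / 40))
    print(f"morph amount: {morph_amount:.2f}")

encoder = RotaryEncoder(a=17, b=18, max_steps=20)  # BCM pins on the header
encoder.when_rotated = on_turn

pause()  # keep the script alive, waiting for encoder events
```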
What – to the extent that there is any – is your ideal workflow for working with Project LYDIA?
On one hand, there needs to be an immediately useful application – you approach it and, with very little fuss or muss, it just does something. It starts to inspire you. Hence the familiar stompbox form factor and the concept of effecting an input. On the other hand, there's this other layer to it. It's like, "I know what you've sold this to me for, but I'm gonna immediately start to hot-rod it." And we want to embrace both. The founder of Qosmo and Neutone, Nao Tokui, wrote a book called Surfing Human Creativity with AI – A User's Guide. In it, he talks very eloquently about something that we think a lot about, which is the unintended use of technology. One of the examples he gives is the [Roland] TR-808 [drum machine], and how what it became famous for was absolutely not what it was designed to do. When Mr. Kikumoto, the leader of the project, first heard the "boom" that everyone associates with the 808, he was worried that he was going to be responsible for destroying speaker cones. But that's what ended up becoming loved about the 808. So, this idea of unintended uses – whether it be by accident or by someone saying, "Nope, I see what you want me to do with this, but I'm going this other way" – is when something new and magical happens. We're fortunate that so many of the products that Roland has created have inspired genres, like the TB-303 and acid house, or the TR-909 and house music. We designed the tools – it was the musicians who brought their agency and their imagination to these instruments and these platforms and created something really magical. So, we're thinking a lot more about this: Can we be more intentional in designing instruments that actually encourage that unintended use? Can we actually present something as, "No, we want you to do what you want"?
What's next for Project LYDIA?
We're working on the next version right now. Soon, we'll start showing it to the public and continue to learn. In Principles… we talked about the need to consult with artists and music creators and not just make assumptions about what we think is going to be cool or helpful or supportive or inspiring. Because this is a very disruptive technology, and we need to be responsible and ask the creators what they think.
After I finished this interview, the updated version of Project LYDIA that Paul spoke of debuted at Superbooth in Berlin and the Audio Developers Conference in Tokyo on May 7, 2026. While still considered a Technology Preview, this next version includes some significant improvements: fully integrated audio circuitry (no need to connect to an external USB audio interface), a large LCD screen with accompanying controls for model navigation and parameter assignment, and MIDI connectivity for greater real-time performance control. It will be exciting to see where Project LYDIA goes next!