The 62nd Annual Grammy Awards promises to be one of the highest-rated TV shows of 2020, even though its audience will almost certainly be smaller than in past years. It will certainly be a top 10–rated non-sports show, and by pure audience size it has a chance to crack the US top 50 across all genres.
In 2030, who – or what – will be eligible to win a Grammy? What will the categories evolve into? Will music need to be recorded at all?
In 2020, you can win a Grammy in over 80 categories. You will be familiar with awards such as “Best Pop Solo Performance” or “Best Rap Performance.” You will be less familiar with categories such as “Best Immersive Audio Album” or “Best Instrumental Composition.” The Grammys are “a celebration of excellence, the music community’s highest honor, and its only peer-based award.”
In every case, the peer-reviewed awards are given to human beings who use tools to make music. (Yes, musical instruments are tools and, while we’re on the subject, “technology” is a fancy word for tool.) The best musical tool users are awarded Grammys.
What will happen when the tools make music by themselves?
At CES 2020, Samsung introduced Neon, an AI-based companion that is being developed to be indistinguishable from a human companion. AI models are composing at a pretty high level right now. It won’t be long before most production music (background music, music for breaks in and out of segments, and other utility music) will be fully produced by AI. We’re only moments away from synthetic artists and superstars. We’re only a few months (maybe a year or two) away from completely artificial artists (not virtual, artificial — see Neon above).
The production music composed by AI will be suboptimal at first, but it will become better over time, and it will be adopted very, very quickly. Why? Economics will drive the decision-making.
The Evolution of Production Music
So-called canned music – music pre-recorded on the theory that someone might need it someday – was long considered suboptimal, in part because it was hard to use: it was distributed on vinyl records with a paper catalog describing what each song was. In the 1990s, vast catalogs of canned music became available in high-quality digital formats. Because the tracks were files, they were quickly searchable. The higher convenience and lower price changed everything: budgets for original compositions plummeted and have never returned to previous levels. AI will have the exact same impact on every current production music business model. Get ready: the business is going to take another 10x revenue hit.
That’s on the composition and production side of the business. What about on the engineering, mixing, and mastering side?
Mastering Engineers Go First
Mastering engineers will be the first to succumb to AI. Mastering plug-ins are already used by everyone – even mastering engineers. Within a year or so, AI models will be able to identify the genre of a piece, analyze best-practice “Grammy Award–winning” mixes from the past, and mimic the sonic quality and sonority of the best recordings. No human will be able to compete – on time, price, or subjective quality. In practice, most run-of-the-mill mastering will be done the moment you select the distribution channel, press “Save,” and upload.
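To make the idea less abstract, here is a toy sketch of one step such an automated mastering pipeline might perform: normalizing a track’s loudness toward a genre target. The function names and the −14 dBFS RMS target are illustrative assumptions of mine, not a description of any real product (real pipelines use perceptual loudness measures and limiters, not a single gain).

```python
import numpy as np

def rms_dbfs(samples):
    """Root-mean-square level of a [-1, 1] float signal, in dBFS."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(max(rms, 1e-12))

def normalize_loudness(samples, target_dbfs=-14.0):
    """Apply a single gain so the track's RMS level hits the target."""
    gain_db = target_dbfs - rms_dbfs(samples)
    gained = samples * 10 ** (gain_db / 20)
    # Crude clip protection; a real chain would use a limiter here.
    return np.clip(gained, -1.0, 1.0)

# A quiet 440 Hz sine stands in for an under-leveled track.
t = np.linspace(0, 1, 44100, endpoint=False)
track = 0.05 * np.sin(2 * np.pi * 440 * t)
mastered = normalize_loudness(track)
```

The point is not the DSP – it is that every step here is deterministic, cheap, and runs in milliseconds, which is why “mastering on upload” is an economic inevitability.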
Recording Engineers Follow Quickly
What about engineering? Mic placement is important. So is gain staging. So is balancing a section. So is proper stem organization. So is… Yep – all of it can be modeled by AI and, in most cases, corrected after the fact by algorithms trained to recognize and repair the problems. You didn’t quite mic the snare correctly. The stereo pair was placed too close to the bass strings on the piano. The late reflections from the snare drum made the mix sound muddy. Today these are issues. Very soon they won’t matter. In a world full of AI engineers, “Fix it in the mix” is going to take on an entirely new meaning.
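As a hedged illustration of the kind of check an AI engineer might run, here is a sketch that flags a “muddy” mix by measuring how much of a track’s energy sits in the low mids. The 200–500 Hz band, the threshold, and the function name are all assumptions invented for this example:

```python
import numpy as np

def mud_ratio(samples, sr=44100, lo=200, hi=500):
    """Fraction of total spectral energy in an assumed low-mid 'mud' band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), 1 / sr)
    band = spectrum[(freqs >= lo) & (freqs < hi)].sum()
    return band / spectrum.sum()

t = np.linspace(0, 1, 44100, endpoint=False)
clean = np.sin(2 * np.pi * 1000 * t)              # energy well above the band
muddy = clean + 3 * np.sin(2 * np.pi * 300 * t)   # strong 300 Hz component
```

A detector like this is trivial; the interesting part is the next step, where a model learns which correction (EQ, re-balance, de-reverb) a human engineer would have chosen.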
New Musical Categories for a New World
In 2030, what will be Grammy eligible? Not who – what? Will an AI model win a Grammy for “Best Immersive Audio Album” or “Best Instrumental Composition”? More than half of the music engineered and produced for those categories will be completely created by AI. Who will own that music? Who will get royalties? Will AI models be proprietary or commoditized, or both? What if you mash up a bunch of AI-generated music, and then another AI is “inspired” by your creation and creates a work that is considered subjectively or objectively better?
Today, with social media, music is more personal and fractionalized than ever, which makes music the perfect candidate for AI-assisted classification and categorization.
In 2030, we will live in a world of data-driven personalization. We will expect everything to be personalized — especially our music. Will the classifications R&B or Hip Hop or Pop or Jazz have meaning? What will the AI-generated musical genre categories of 2030 look like? Who will name them? Will there be a few hundred subcategories of R&B that have clear geographic and demographic clusters? Will we need personalized Grammy Awards bestowed by a peer group of AI models to the individual (or collective) AI models to celebrate “excellence”?
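A minimal sketch of what AI-discovered subgenre clusters could look like, using plain k-means over synthetic per-track feature vectors (tempo, brightness, vocal density). Every feature, value, and name here is invented for illustration; real systems would cluster learned embeddings, not three hand-picked numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, centroids, iters=50):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    centroids = centroids.copy()
    for _ in range(iters):
        dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centroids = np.array([points[labels == j].mean(axis=0)
                              for j in range(len(centroids))])
    return labels

# Two synthetic "subgenres", as (tempo BPM, brightness, vocal density):
slow = rng.normal([90.0, 0.2, 0.6], [5.0, 0.05, 0.05], size=(50, 3))
fast = rng.normal([150.0, 0.8, 0.3], [5.0, 0.05, 0.05], size=(50, 3))
tracks = np.vstack([slow, fast])

# Seed one centroid in each group so the toy example converges cleanly.
labels = kmeans(tracks, tracks[[0, -1]])
```

Scale this from 2 clusters to a few hundred, add geographic and demographic features, and you get exactly the hyper-fractionalized category map the questions above imagine – named by whoever controls the model.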
It may not happen like this, but some version of this is going to happen
Mark the date of this article. Today, you think it is science fiction and the rantings of a lunatic. Maybe. But I work with some of the world’s best data scientists on AI models that are all too quickly changing the world. Tonight will be music’s biggest night of 2020. I’m here to celebrate, and I’m bringing my AI coworker (an AI that works with me to compose, produce, and engineer my projects) to the show. I want it to listen and learn.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.