The above image came from Midjourney when asked to show a diagram illustrating the relationship between music and AI. If you look closely, you'll see that relationship is a lot further off than you might think. That said, artificial intelligence (AI) has the potential to revolutionize the music industry by enabling the creation of new and unique compositions that were previously unimaginable. However, the use of AI to generate music also raises a number of social and legal issues that must be carefully considered. Let's start with the basics.
What is AI and what does that mean?
Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and act like humans. These intelligent machines can be trained to perform a wide range of tasks, from recognizing patterns and making decisions to translating languages and driving cars. While AI has the potential to improve efficiency and make our lives easier, it is also important to carefully consider the ethical implications of using AI and to ensure that it is designed and used responsibly. Some people may be afraid of AI because they are unsure of how it will impact society and the economy, or because they are concerned about the loss of privacy or control. It is important to address these concerns and to work towards the responsible development and use of AI.
There are of course many interpretations, and the definition above was actually written by an AI model called ChatGPT in response to the prompt, "Could you explain AI in one paragraph that takes into consideration people's fear about it?"
What models are being used in music?
Let's start with the different types of AI models used for music. This is not an exhaustive list, just five of the most popular approaches.
- Generative Adversarial Networks (GANs): These are a type of neural network that can learn to generate new, original content by competing against each other. GANs have been used to generate music, as well as images, text, and other types of media.
- Recurrent Neural Networks (RNNs): These are a type of neural network that is particularly well-suited to processing sequential data, such as music. RNNs have been used to generate music in a variety of styles and genres.
- Evolutionary algorithms: These algorithms use principles of natural evolution, such as selection and reproduction, to generate new musical compositions. They have been used to create both simple and complex pieces of music.
- Markov models: These models use statistical techniques to predict the next event in a sequence, based on the events that have occurred previously. Markov models have been used to generate simple melodies and chord progressions.
- Rule-based systems: These systems use pre-defined rules and procedures to generate music. They can be used to produce music in a particular style or to implement specific musical concepts.
It’s worth noting that these AI models are often used in combination with each other or with human input, and that many other AI models and approaches have also been used for music generation.
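To make the Markov approach above concrete, here is a minimal sketch in Python. The training melody and note names are invented toy data, not taken from any real system: the chain counts which note tends to follow which, then generates a new melody by sampling from those observed transitions.

```python
import random
from collections import defaultdict

def build_transitions(melody):
    """Record, for each note, the list of notes that followed it."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=None):
    """Walk the chain: each next note is sampled from the notes
    that followed the current note in the training melody."""
    rng = random.Random(seed)
    note = start
    out = [note]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:  # dead end: restart from the start note
            note = start
            choices = transitions[note]
        note = rng.choice(choices)
        out.append(note)
    return out

# Toy training melody, invented purely for illustration.
melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "E", "D", "C"]
transitions = build_transitions(melody)
print(generate(transitions, start="C", length=8, seed=42))
```

Because repeated pairs are stored as repeated entries, common transitions are naturally sampled more often, which is exactly the statistical prediction the Markov bullet describes. A first-order chain like this only "remembers" one note back, which is why such models tend to produce simple melodies and chord progressions rather than long-range structure.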
Online services that you can start using right now
So now that we know how some of these typically work, here are five popular AI-powered music generation websites:
- Amper Music: This website uses AI to create custom music tracks based on user input. Users can specify the style, tempo, and length of the track, and Amper will generate a unique piece of music.
- Jukedeck: This website used AI to create original music tracks based on user input. Users could choose the style, mood, and length of the track, and Jukedeck would generate a unique piece of music. TikTok recently acquired Jukedeck, and the service is currently offline with no word on when it might return.
- AIVA (Artificial Intelligence Virtual Artist): This website uses AI to compose original music in a variety of styles. Users can request specific types of music, and AIVA will generate a custom track based on their input.
- Mubert: This website uses AI to generate endless streams of original music in real-time. Users can choose the genre and mood of the music, and Mubert will generate a continuous, personalized soundtrack.
- Melodrive: This website uses AI to generate music tracks based on user input. Aimed mostly at gamers, it lets you specify the style, length, and structure of the track, and Melodrive will generate a unique piece of music.
It's worth noting that these websites offer a range of features and capabilities, the specific music generation tools and techniques they use vary, and new tools are being added daily.
What are the implications?
As with any new technology, there are implications. Yes, AI will take jobs, but people once said the same about tractors, and the reality is that some jobs will go away while a lot of new jobs will be created. In fact, learning how to interact with an AI is not as easy as you might think. Remember that it doesn't have the input of facial expressions, tone of voice, body language, or context. It only has your words, so being concise is not just an art form; it's becoming a new job called "prompt engineer." That means understanding the language and syntax that an AI prefers, which gets you the best results. It's like the old saying, "garbage in, garbage out." The same could be said about Google searching, when you think about it.
So how will it affect music production? There are several implications when it comes to music:
- One of the most pressing social issues related to AI-generated music is the potential for it to replace human musicians and composers.
- Even in its infancy, it's so much better than anyone expected that it will be almost impossible to compete on a "production" level within a year or less
- Copyright gets very messy, in music and in other fields. Take samples: AI derives its results from a montage of sources that are likely copyrighted, but unlike music samples, those sources are almost undetectable
- Who owns the music? Is it the prompt engineer? It has already been established legally that, at present, an AI cannot own intellectual property
- Because AI is trained on derivative data, there's a concern that music will become less diverse over time, succumbing to being mostly derivative and recycling the old into the new
- If you're an artist who needs help with production, you will likely be faced with a choice: hire one person with their own unique experience and perspective, or ask an AI that has the combined information and experience of every producer it knows about. I use the doctor metaphor all the time: "If you were diagnosed with a disease, would you settle for one or two human opinions on a cure, or a computer that has access to all of the medical knowledge known to man, at a tenth the cost and with a diagnosis in seconds?"
Despite these challenges, the use of AI to generate music is likely to continue to grow in popularity in the coming years. As the technology improves, it is expected that AI will be able to produce increasingly complex and sophisticated compositions that are more closely aligned with human-generated music. However, it will be important for the music industry and lawmakers to carefully consider the social and legal implications of this technology, and to ensure that it is used in a way that is fair and equitable for all stakeholders.