
INTRODUCTION

In this seminar paper, I wish to showcase the ways in which the field of Artificial Intelligence has recently been applied in the creative field of music, particularly with regard to generating musical compositions, performance, music theory and digital sound processing. I also wish to answer the question of what could be achieved in this field in the near future.

First, I will attempt to outline the basic terminology of the field of Music and Artificial Intelligence, introducing the concept of Recombinant Music and the forms in which it is created. A few examples of such systems in action are presented.

Second, based on my research findings, I will attempt to outline the possible future applications of Music and AI in making music creation easier for the world at large, once more of the present limitations have been overcome.


AIM

The aim of this seminar paper is to answer the thesis question: ‘What are the ways in which AI can aid the creative process of making music?’

RELATED LITERATURE

Virtually all of the literature used for this seminar paper consists of academic documents written on the subject by students and professors around the world, with a select few references drawn from web pages. To keep the contents of this seminar paper recent, the majority of the academic papers selected were written in the 21st century, that is, from the year 2000 to date. They are:

“Comparing Artificial Intelligence Music And Human Music: A Case Study Of Prof. David Cope’s Emmy System” by Iris Yuping Ren

“Algorithmic Songwriting with ALYSIA” by Margareta Ackerman and David Loker

“Automated Music Composition: An Expert Systems Approach” by John A. Dion

“AI Methods in Algorithmic Composition: A Comprehensive Survey” by Jose David Fernandez and Francisco Vico

“Musical Knowledge: What can Artificial Intelligence bring to the musician?” by Geraint Wiggins and Alan Smaill

LITERARY RESEARCH FINDINGS

In this section I present, in a concise manner, the knowledge gained from studying the literature on the subject, with the ultimate goal of achieving the aim stated above.

Artificial Intelligence (AI):
The field of Artificial Intelligence, or AI, focuses on the study of simulating intelligent behaviour in machines and computers. In other words, it involves computers and machines that can perform tasks characteristic of human-level intelligence, with the main goal of increasing the usefulness and application of computers in everyday life. This includes the ability to reason and discover, and to learn from the data the machine has been fed and adjust itself accordingly, in essence ‘training’ the computer.

There are two main categories of AI: Cognitive Science (which deals with the development of human-level intelligence) and Applied AI (which deals with the development of programs exhibiting intelligent behaviour). The study of AI is further subdivided into detailed sub-fields such as speech processing, mathematical simulation and reasoning. The focus of this seminar paper is on the sub-field of ‘Music and Artificial Intelligence’.

Expert System:
Relevant to this discussion is the Expert System, which The International Dictionary of Artificial Intelligence defines as “an information system that represents expert knowledge for a particular problem area as a set of rules which performs inferences when new data is entered.” Based on this definition, musical AI systems can be said to be Expert Systems.
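
To make this definition concrete, the following is a minimal, hypothetical sketch in Python of an expert system’s rule-based inference step; the musical rules and the input data are invented purely for illustration and are not taken from any of the systems discussed later.

# Minimal, hypothetical expert-system sketch: musical "knowledge" is stored as
# if-then rules, and an inference step fires every rule whose condition
# matches the newly entered data.

def make_rules():
    """Each rule is a (condition, conclusion) pair over a dictionary of musical facts."""
    return [
        (lambda f: f.get("mode") == "minor" and f.get("tempo", 120) < 80,
         "suggest a sparse, melancholic accompaniment"),
        (lambda f: f.get("mode") == "major" and f.get("tempo", 120) >= 120,
         "suggest a bright, rhythmically dense accompaniment"),
        (lambda f: f.get("dissonance", 0.0) > 0.7,
         "resolve to a consonant chord within the next bar"),
    ]

def infer(facts, rules):
    """Return the conclusion of every rule whose condition holds for the new data."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

if __name__ == "__main__":
    new_data = {"mode": "minor", "tempo": 72, "dissonance": 0.8}
    for advice in infer(new_data, make_rules()):
        print(advice)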

Digital Signal Processing (DSP):
DSP refers to the various techniques for improving the reliability and accuracy of digital communications. It works by clarifying the levels of a digital signal. A DSP circuit is able to differentiate between orderly signals and chaotic noise, and is aimed at improving the signal-to-noise ratio of a sound or communication system. If the input is an analogue signal, an analogue-to-digital converter is used first; the DSP circuit then processes the signal and converts it back to analogue via a digital-to-analogue converter. Since this topic is concerned with digital computer data, there is no need for a converter, as the DSP can act directly on the signal.
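
As a simple illustration of a DSP operation acting directly on a digital signal, the sketch below (in Python, using NumPy) applies a moving-average filter to a noisy test tone and measures the resulting improvement in signal-to-noise ratio; the test signal, noise level and window size are arbitrary values chosen for this example.

# Simple DSP illustration: a moving-average (low-pass) filter applied directly
# to a digital signal to reduce noise and improve the signal-to-noise ratio.
import numpy as np

def moving_average(signal, window=8):
    """Smooth a 1-D digital signal by averaging each sample with its neighbours."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def snr_db(clean, noisy):
    """Signal-to-noise ratio in decibels, measured against a clean reference."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

if __name__ == "__main__":
    t = np.linspace(0, 1, 1000)
    clean = np.sin(2 * np.pi * 5 * t)               # 5 Hz test tone
    noisy = clean + 0.3 * np.random.randn(t.size)   # tone plus additive noise
    filtered = moving_average(noisy)
    print(f"SNR before filtering: {snr_db(clean, noisy):.1f} dB")
    print(f"SNR after filtering:  {snr_db(clean, filtered):.1f} dB")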

Music And Artificial Intelligence:
In the field of AI, many advances have been made in allowing computers to solve thousands of problems with specific solutions in seconds. Solving creative problems, however, presents a new challenge for the field of AI, as such problems do not have a single specific right answer. These challenges are being met head-on by music researchers, who have come up with numerous musical AI systems to make music composition easier for amateur and professional musicians alike.

Regarding music composition, the application of AI varies in a number of ways: from using AI to generate a real-time accompaniment to a music performance, to using AI to compose music from scratch, to AI systems used to generate new compositions from existing musical compositions. In each case, constraints of some sort have to be added to create something truly usable and musical, as pure randomness could yield unpredictable results. Though not without its limitations, this field has seen AI systems that have been quite successful in creating music. Approaches used to simulate musical creativity include mathematical models, knowledge-based systems and evolutionary methods, most commonly reliant on Markov chains. These methods have been applied to various genres, ranging from classical music to jazz, pop to folk, and everything in between.
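
As a minimal sketch of the Markov-chain approach mentioned above, the Python example below learns first-order note-to-note transitions from a short training melody and then samples a new melody from them; the training melody and note names are arbitrary and serve only to illustrate the technique.

# Minimal Markov-chain composition sketch: learn first-order note transitions
# from an example melody, then sample a new melody from those transitions.
import random
from collections import defaultdict

def train_markov(melody):
    """Record which notes follow each note in the training melody."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=16):
    """Walk the chain, picking each next note in proportion to its observed frequency."""
    note, output = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(note) or [start]  # dead end: restart from the start note
        note = random.choice(choices)
        output.append(note)
    return output

if __name__ == "__main__":
    training_melody = ["C4", "D4", "E4", "G4", "E4", "D4", "C4", "E4", "G4", "A4", "G4", "E4"]
    chain = train_markov(training_melody)
    print(" ".join(generate(chain, start="C4")))

Adding constraints on top of such raw sampling, for example forcing phrases to end on the tonic, is what turns the output into something usable and musical, as noted above.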

Recombinant Music:
This is a term coined by Professor David Cope of UC Santa Cruz, defined as music composed by the analysis and deconstruction of several works of a composer, seeking out common variations, structures and themes. These are then rearranged into new musical compositions with added variations in phrasing, key and note selection. This is done in two forms: Syntactic Meshing and Semantic Meshing.

SYNTACTIC MESHING comes in two forms: voice hooking and texture matching. Voice hooking is responsible for ensuring that musical sections link together logically; this is done by restricting the notes between phrases to the interval occurring between the corresponding phrases in the original music piece. Texture matching is responsible for ensuring that phrases are spread out in time, allowing pitches to be moved by octaves.

SEMANTIC MESHING builds tension and resolution in a music piece using a system of five letters (S, P, E, A, C), which respectively mean Statement, Preparation, Extension, Antecedent and Consequent. These letters, or labels, are attached to the chords or phrases contained in a musical piece to identify the current state of the musical composition.
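
The following is a highly simplified, hypothetical sketch of how SPEAC labels might be attached to phrases and used to constrain recombination; it is not Prof. Cope’s actual algorithm, and the allowed label transitions and the phrase bank are invented purely for illustration.

# Hypothetical SPEAC sketch: phrases carry one of the five labels (Statement,
# Preparation, Extension, Antecedent, Consequent), and recombination only
# allows label transitions that keep a plausible tension/resolution flow.
# This is an illustration only, not Prof. Cope's actual implementation.
import random

# Which labels are allowed to follow which (invented for this example).
ALLOWED_NEXT = {
    "S": ["P", "E", "A"],   # a statement may be prepared, extended, or build tension
    "P": ["S", "A"],        # a preparation leads into a statement or an antecedent
    "E": ["E", "A", "S"],   # an extension continues, builds tension, or restates
    "A": ["C"],             # an antecedent demands a consequent (resolution)
    "C": ["S", "P"],        # after resolution, start a new statement or preparation
}

def recombine(phrases, length=6):
    """Chain phrases so that consecutive SPEAC labels respect ALLOWED_NEXT."""
    current = random.choice([p for p in phrases if p["label"] == "S"])
    piece = [current]
    for _ in range(length - 1):
        candidates = [p for p in phrases if p["label"] in ALLOWED_NEXT[current["label"]]]
        current = random.choice(candidates)
        piece.append(current)
    return piece

if __name__ == "__main__":
    phrase_bank = [
        {"label": "S", "notes": ["C4", "E4", "G4"]},
        {"label": "P", "notes": ["G3", "B3", "D4"]},
        {"label": "E", "notes": ["E4", "F4", "G4"]},
        {"label": "A", "notes": ["D4", "F4", "B4"]},
        {"label": "C", "notes": ["C4", "E4", "C5"]},
    ]
    for phrase in recombine(phrase_bank):
        print(phrase["label"], phrase["notes"])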

A FEW EXAMPLES OF MUSICAL INTELLIGENCE SYSTEMS

The following are a few examples of what has been achieved in the field of Music and Artificial Intelligence:

Prof. David Cope created a system called Emmy (or EMI, Experiments in Musical Intelligence) in LISP over a period of 20 years, designed to emulate the music of various legendary composers. Emmy was trained by analysing and deconstructing a series of a composer’s musical scores input into the system; it would then create new compositions based on that composer’s style. Presented with three pieces, one composed by Emmy, one composed by a scientist and one composed by Bach, an audience could not tell which was which, mistaking the scientist’s piece for the artificial one and Emmy’s piece for the actual Bach music.

ALYSIA, which stands for Automated Lyrical Songwriting Application, is a co-creative songwriting system designed specifically to help both musicians and amateurs write professional-quality music. Based upon prediction models, ALYSIA can be used to generate many different melody options in the style of the given lyrical phrases. Unlike most other musical AI systems, it utilizes random forests instead of Markov chains.
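
The features and training data used by ALYSIA are not detailed here, so the sketch below only illustrates the general idea of using a random forest (via scikit-learn) to predict the next note from simple features; the features, training examples and scale degrees are all invented for this illustration and do not reflect ALYSIA’s actual model.

# Illustrative random-forest sketch (not ALYSIA's actual model): predict the
# scale degree of the next note from simple, invented lyric/melody features.
from sklearn.ensemble import RandomForestClassifier

# Each training example: [previous scale degree, syllable stressed (0/1), beat position]
X_train = [
    [1, 1, 0], [2, 0, 1], [3, 1, 2], [5, 0, 3],
    [5, 1, 0], [4, 0, 1], [3, 1, 2], [2, 0, 3],
]
# Target: the scale degree sung on the next syllable.
y_train = [2, 3, 5, 5, 4, 3, 2, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict a pitch for a new stressed syllable on the downbeat, following degree 3.
print(model.predict([[3, 1, 0]]))

Unlike a first-order Markov chain, which conditions only on the previous note, such a model can take features of the lyrics themselves into account when proposing melodies.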

An AI known as LANDR can be used to master audio tracks, an intricate and complicated task, with little to no human input. The system is built around an adaptive engine which reacts to musical input, analysing the production style of the track and referencing the massive number of tracks in different genres it has learned from. Based on the resulting digital fingerprint, LANDR chooses a set of audio post-processing effects and adjusts their parameters for the best possible sound. Using LANDR’s algorithm, a song that could have taken weeks to master can instead be mastered on the same night it was created.

Amper is an intelligent music system that utilizes AI technology to allow anyone to create professional-level music instantly, regardless of musical experience. All that is needed as input is the desired musical style and mood; Amper does the rest, generating music synced to your content in mere seconds.

THE FUTURE: RESEARCH AND DEVELOPMENT PROPOSITIONS

As the field of Musical AI further develops, efforts could be made to implement systems and techniques that enable Intelligent Musical Systems to multitask better. Already, we are at the point where musicians can make music with commercially available computer tools without needing to learn an instrument, breaking the boundary between composer and performer.

Further advances in AI could take this a step further, opening up more opportunities for content creators to be more efficient and effective, with computer-aided algorithmic composition (CAAC) and recombinant music helping to overcome creative block by generating ideas for humans to refine and build upon. The bar would be lowered for aspiring amateur composers, enabling them to express their creative vision. In essence, the future of music will involve collaborations between humans and musical AI systems, with such collaborations becoming hit songs in a variety of genres and listeners finding it difficult to determine where the human involvement stops and the algorithmic composition begins.

CONCLUSION

Artificial Intelligence is used to create systems which aid humans in solving problems in a variety of fields. Far from being just a sub-field of computer science, the field of AI is so broad that its ideas, concepts and applications spread across more disciplines than the science of computers alone, for example mathematics, psychology and even the creative arts.

Despite the impressive achievements in the field of Music and Artificial Intelligence, however, algorithmic composition is not without its limitations. For example, most musical AI systems produce music with an unclear phrase structure, as well as a lack of musical expressiveness (that is, the unique nuances of performance, the ‘personal touch’). In addition, the intelligent musical systems developed so far have been built using ad-hoc methods (for a specific problem or case), leading to over-specialization. It could be said, therefore, that the machines are not truly intelligent, as they are limited by the source material they were trained on.

The intersection of machine learning and music appears to be beneficial, if still largely imperfect, and at the rate at which the technology is being improved upon, such technologies could see widespread use within a matter of decades.