Attempts to generate music on a computer have a long history. The first computer-synthesized music was realized in 1950 on the Australian CSIRAC computer. Later, the MUSIC-N series of programming languages, begun by Max Mathews in 1957, became the basis for computer-generated sound synthesis. However, computers at that time could not play music in real time: a program calculated the waveform over time, wrote the result to a sound file, and the mixed sound was then recorded onto tape to produce the final musical work. The process was very time consuming.
Only in the 1980s, with boards dedicated to sound processing installed in computers, did real-time sound synthesis become possible. The ISPW board, developed and sold by IRCAM in France in the 1980s, enabled real-time sound synthesis and processing when attached to a NeXT computer. Computers could now process sound in real time, enabling computer-based “live electronics” that generates sound on the fly and modulates the sound of live instruments during a performance. However, the board was very expensive and sold in only small numbers, so this capability remained a privilege of researchers affiliated with studios and research institutions specializing in computer music, and of the composers who commissioned works from them.