
SOFTWARE

An interesting device patented in 1875 by Elisha Gray: it could be considered a musical device based on Software, because it is an early codec for the electrification of Musical Tones.

Elisha Gray Electro-Harmonic Keyboard 1876.jpg
Elisha Gray Telegraph transmitting Musical Tones 1875.jpg

One year later, in 1876, and before the invention of the telephone, Elisha Gray patented the Electro-Harmonic Telegraph. The image clearly shows the strong link between Electro-Magnetism, Telecommunication Systems and Music (Code).

Melvin Linwood Severy, the inventor of the Choralcello (1900), the electro-magnetic piano-organ that preceded the early electronic musical instruments, patented in 1916 the SYNTHETIC HARMONOGRAM PRODUCER, a first attempt to reproduce synthetic sounds based on the storage (Memory) of different sounds (Samples). It is a step beyond pure Musical Tones (Oscillations) toward the codification of Timbres (Spectrum). Every detail of his electro-magnetic mechanism is related to the acoustic properties of different materials in different dimensions (1-D Resonant String, 2-D Resonant Surface, 3-D Resonant Cavity).

Melvin Severy 1916 Encoding Sound.jpg
Helmholtz Resonator by Max Kolh 1905.jpg
Melvin Severy 1916 Synthetic Harmonigram.jpg
Max Mathews.jpg

But the origin of the Digital Music reproduced today on all CPU-based electronic devices can be traced back to Max Mathews in 1957, when, together with Newman Guttman, he was able to coax the music of The Silver Scale out of an IBM machine: a real Software-based musical piece, made with the first MUSIC program.

Here starts the conceptualization of Digital as a computational device that operates on numbers (values); but we still have the original conceptualization of Digital as something that uses Fingers (from the Latin digitus). DIGITL is the Nahuatl neologism that contains both meanings.

The Silver Scale.jpg

Max Mathews always considered the concept of Real-Time a fundamental aspect of Digital Music, and that was the most difficult challenge during the last six decades of the digitalization of musical practices.

In Music, Input-Output and Computational Processes need to be completed in less than 10 milliseconds. These days we have technologies capable of that.
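
The 10-millisecond budget is easy to translate into audio buffer sizes. A minimal sketch, assuming a 48 kHz sample rate (a common but here illustrative value), of the latency contributed by the buffer alone:

```python
# Latency contributed by one audio buffer at an assumed 48 kHz sample rate.
sample_rate = 48000
for buffer_size in (64, 128, 256, 512):
    latency_ms = 1000 * buffer_size / sample_rate
    print(f"{buffer_size:4d} samples -> {latency_ms:.2f} ms")
```

A 256-sample buffer alone costs about 5.3 ms, so input, processing and output can still fit under the 10 ms threshold; at 512 samples the buffer alone already consumes most of the budget.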

In the 1980s, Miller Puckette developed the software The Patcher, later named MAX in honor of Max Mathews. From that moment it was possible to use computers to control the parameters of processes and musical events. Computer-based Real-Time sound synthesis arrived a decade later, and the paradigm was established by Miller Puckette with his project PURE DATA, free software available on the web.

In a parallel and connected path, David Zicarelli, with the company Cycling '74, launched the product MAX/MSP, a software with strong development during the last two decades, focused on Real-Time applications. MSP stands for Max Signal Processing.

PD Pure Data.jpg
Infinite Process.jpg

Here we can see an artistic impression of the morphological similarity between a Pure Data patch, with its lines containing and representing the digital music-audio channels, and a magnificent church organ with individual trumpets (3-D resonant cavities) for every Note or musical Action. It is the relation between a purely Virtual and Numerical world (Time doesn't exist there...) and an absolutely Physical and Acoustic world (Time exists here...).

Pure Data -Organ PROTOSON.jpg
Pitch Detection Signal Routing.jpg

An Audio signal in the digital domain is just a set of numbers representing the sonic energy at a given time (a relative "time", counted from the moment the sound is transduced to digital...). This Audio Signal is represented as a Waveform, and it is possible to analyse the Data (Numbers) and obtain variables linked to that waveform. In Music signals, the most basic and important of these variables are the Amplitude and the Pitch.
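
A minimal sketch of this idea in Python with NumPy (the 330 Hz tone, block length and 48 kHz rate are illustrative assumptions): a block of digital audio is literally an array of numbers, and the Amplitude variable can be read from it directly.

```python
import numpy as np

# Ten milliseconds of digital audio is just an array of sample values.
sample_rate = 48000
t = np.arange(int(0.010 * sample_rate)) / sample_rate
block = 0.5 * np.sin(2 * np.pi * 330.0 * t)

# The Amplitude variable, read straight from those numbers:
peak = np.max(np.abs(block))          # instantaneous peak of the block
rms = np.sqrt(np.mean(block ** 2))    # average energy (RMS) of the block
```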

Real-Time Pitch Detection is the basic algorithm of DIGITL, the heart of the instrument, and here the actual time-period of the signal matters. In order to estimate the pitch we need at least one complete cycle of the periodic signal, which is why the higher the frequency, the shorter the analysis window can be. The 1-D String in DIGITL is tuned to E4, around 330 Hz, and it is possible to obtain a pitch estimate from a Fast Fourier Transform in less than 5 milliseconds.
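
A minimal sketch of an FFT-based pitch estimate on a short frame (the frame length and test tone are illustrative assumptions; real-time detectors often prefer autocorrelation or YIN-style methods for very short windows):

```python
import numpy as np

# Illustrative frame: a pure 330 Hz tone, the E4 of the DIGITL string.
sample_rate = 48000
f0 = 330.0
frame_len = int(0.012 * sample_rate)      # ~12 ms, about four periods of E4
t = np.arange(frame_len) / sample_rate
frame = np.sin(2 * np.pi * f0 * t) * np.hanning(frame_len)

# Zero-padding the FFT refines the frequency grid far below the raw
# bin width (48000 / 576 ~ 83 Hz), so the peak can be located precisely.
n_fft = 1 << 16
spectrum = np.abs(np.fft.rfft(frame, n_fft))
freqs = np.fft.rfftfreq(n_fft, 1.0 / sample_rate)
estimate = freqs[np.argmax(spectrum)]     # close to 330.0
```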

Here is a screenshot of the program Melodyne by Celemony, showing the application of Pitch Correction to an Audio Signal, in this case focused on the human Voice. Today these software-based technologies are known to everyone under the name Auto-Tune.

That means it is possible to change the Pitch of a signal in Real-Time while taking care of its spectral qualities. And that is the essence of the DIGITL instrument: the transformation of the incoming signal, generated by the Finger-Actions of the musician, into an unlimited recreation of new audio data.
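
The crudest possible pitch shift is simply resampling, sketched below as an illustration only: reading the samples faster raises the pitch but also shortens the signal, which is precisely what Melodyne-class tools avoid with far subtler, duration- and formant-preserving spectral methods.

```python
import numpy as np

def shift_pitch(x, ratio):
    # Naive pitch shift by resampling: a ratio > 1 raises the pitch
    # but shortens the signal (duration is NOT preserved here).
    positions = np.arange(0, len(x) - 1, ratio)
    return np.interp(positions, np.arange(len(x)), x)

sample_rate = 48000
t = np.arange(sample_rate // 10) / sample_rate       # 100 ms
tone = np.sin(2 * np.pi * 330.0 * t)                 # E4
fifth_up = shift_pitch(tone, 2 ** (7 / 12))          # ~494 Hz (B4)
```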

Enough computational power is now available for the real-time computation of thousands of parallel processes.

Melodyne Routing based on Pitch Detection.jpg
MAGENTA DDSP.jpg

In 2020, the Magenta project by Google, researching in AI solutions applied to Music, developed the concept DDSP - Differentiable Digital Signal Processing. The image from above shows a clear depiction of the concept: a musician in real-time playing an instrument, a  Computer running a deep-learning neural network (that after analysis for Pitch and Amplitude detection, creates a new signal that could be post-processed indefinitely, and eventually is generated a signal that will be transduced again to Sound on LoudSpeakers.
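
The analysis-to-resynthesis loop at the core of this idea can be sketched without any neural network: given pitch and loudness curves (in DDSP they come from the learned model), an additive harmonic oscillator generates an entirely new signal. The 1/k harmonic roll-off below is an assumption for illustration, not Magenta's actual code.

```python
import numpy as np

def harmonic_synth(f0, amp, n_harmonics=8, sample_rate=48000):
    # Additive resynthesis driven by per-sample pitch and loudness curves,
    # in the spirit of DDSP's harmonic oscillator (greatly simplified).
    phase = 2 * np.pi * np.cumsum(f0) / sample_rate   # running fundamental phase
    out = np.zeros_like(f0)
    for k in range(1, n_harmonics + 1):
        out += np.sin(k * phase) / k                  # assumed 1/k roll-off
    return amp * out / n_harmonics

n = 4800                              # 100 ms at 48 kHz
f0 = np.full(n, 330.0)                # "detected" pitch curve
amp = np.linspace(1.0, 0.0, n)        # "detected" decaying loudness curve
resynth = harmonic_synth(f0, amp)
```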

The image below shows a more recent Magenta development called MIDI-DDSP. It is a fusion between classic human control during musical performance and the precise control of synthetic sound in the digital domain. That is the target of the DIGITL instrument, and the software solutions that can be used are by definition unlimited.

Magenta MIDI-DDSP.jpg
DDSP Neural Network Algorithm.jpg

Here we can see the typical algorithm for this kind of application, with the latent space Z generated after the Deep-Learning training process. The computational power needed here is huge, but the 2020s are the decade of Hardware capable of the task.

There is a concept very important for the DIGITL instrument's Software: the Impulse Response Signal. A Dirac delta is generated and we record the reaction of a given space: the Reverberation of the Space.

The image shows the microphone setup in the central nave of a church. The theory is that if we apply a Convolution (a mathematical digital process) between that response and a signal in real-time, the output signal has the same character (Reverb) as if it had been played in that Space.

And here we are again with a great demand for computational power, because this procedure is an intensive operation.
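
The standard way to make convolution reverb affordable is to do it in the frequency domain. A minimal sketch (the decaying-noise impulse response below is a synthetic stand-in for a recorded one):

```python
import numpy as np

sample_rate = 48000
rng = np.random.default_rng(0)

# Stand-in impulse response: one second of exponentially decaying noise
# (a real IR would be recorded in the actual space with the Dirac method).
ir = rng.standard_normal(sample_rate) * np.exp(-np.linspace(0.0, 8.0, sample_rate))

# Dry input: half a second of a 330 Hz tone.
t = np.arange(sample_rate // 2) / sample_rate
dry = np.sin(2 * np.pi * 330.0 * t)

# Fast convolution: multiply the two spectra, transform back.
# O(n log n) instead of the O(n^2) of direct convolution, which is
# why real-time convolution reverb is feasible at all.
n = len(dry) + len(ir) - 1
wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
```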

Impulse Response Recordings.jpg
Impulse Response FFT.jpg

This is a representation of the Short-Time Fast Fourier Transform of an Impulse Response (which is, in fact, a Sound...). In this case it has a duration of 4 seconds, and we can see the evolution of the Spectrum (Frequency domain) from 0 to 4 seconds. The colour code represents the energy of a given frequency at a given time, with Red depicting high energies and Blue low energies.
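
A minimal sketch of how such a spectrogram is computed (the decaying 330 Hz tone stands in for a real impulse response; frame and hop sizes are illustrative assumptions):

```python
import numpy as np

# Toy "impulse response": a 4-second decaying tone.
sample_rate = 48000
t = np.arange(4 * sample_rate) / sample_rate
response = np.sin(2 * np.pi * 330.0 * t) * np.exp(-t)

# Short-Time Fourier Transform: windowed FFTs over overlapping frames.
frame_len, hop = 2048, 512
window = np.hanning(frame_len)
starts = range(0, len(response) - frame_len, hop)
stft = np.array([np.abs(np.fft.rfft(response[s:s + frame_len] * window))
                 for s in starts])
# stft[frame, bin] holds the energy of one frequency at one moment:
# exactly the quantity the colour map of a spectrogram displays.
```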

This amount of data is what has to be multiplied by the incoming signal, and it is the reason for the huge computational demand.

But with this procedure it is possible to create very complex processing that connects with the concept of Reverb yet transcends it. The transformation of the signal is, again, unlimited by design.

Every software needs some kind of hardware to run on. And because we are talking about a high demand for computation, it is a good idea to use GPUs (Graphics Processing Units). The image shows the plugin Living Sky, software developed by the companies GPU Audio, Mntra and Outer Echo; being a Reverb, it is at the same time a tool for complex transformation of the audio signal.

Living Sky GPU Plugin.jpg
Kemper Amplifier simulator.jpg

Because DIGITL is, in essence, an Electric Guitar, it is worth pointing out its relation with the function of Amplifiers, the fundamental unit necessary to give the signal its different character. For some years now there have been devices known as Amp Simulators, like the Kemper Amp Profiler in the image. These hardware devices (based on digital software) can transform the original signal from the String, creating unlimited types of simulated processes.

This has an inherent relation with the concept of the Impulse Response of a system, because, as in the case of a given Space, we can obtain the Impulse Response of an amplifier (or chain of effects) and loudspeaker. With the same convolution theory we can obtain a simulation of that chain of effects.
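
The capture-and-replay idea can be sketched end to end. The "chain" below is a toy linear filter standing in for an amp and cabinet; real amplifiers are also non-linear, and an impulse response captures only the linear part of their behaviour.

```python
import numpy as np

def toy_chain(x):
    # Stand-in for an amp/effects chain: a simple *linear* recursive filter.
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = 0.5 * x[i] + 0.3 * (x[i - 1] if i > 0 else 0.0) \
             + 0.4 * (y[i - 1] if i > 0 else 0.0)
    return y

# 1. Capture: send a Dirac delta through the chain, record the response.
delta = np.zeros(256)
delta[0] = 1.0
ir = toy_chain(delta)

# 2. Simulate: convolving any input with that IR reproduces the chain.
x = np.random.default_rng(1).standard_normal(64)
simulated = np.convolve(x, ir)[:len(x)]
direct = toy_chain(x)
```

For a linear time-invariant chain the two outputs match to rounding error, which is the whole theory behind cabinet and room simulation by convolution.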

There are also pure software-based solutions like the Neural Amp Modeler, a free open-source project that uses Deep Learning to create transformations of the audio signal.

Neural Amp Modeler.jpg

In DIGITL, the software-based transformation of the original audio captured by the Pickup of every String occurs through independent processes applied to every Action. That means that for every vibration it is possible to create a particular set of transformations linked to the position of the finger, after the analysis for Pitch detection and the routing of the signal. Then every aspect of the signal can be transformed, with the only limitation imposed by the computer system.
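
The detect-then-route idea can be sketched as a lookup from detected pitch to a per-Action processing chain. Everything below is hypothetical illustration (the note names, the routing table and the ring-modulation effect are not DIGITL's actual software):

```python
import numpy as np

def ring_mod(x, f, sample_rate=48000):
    # One possible per-Action transformation: ring modulation.
    return x * np.sin(2 * np.pi * f * np.arange(len(x)) / sample_rate)

# Hypothetical routing table: each detected pitch selects its own chain.
routes = {
    "E4": lambda frame: ring_mod(frame, 55.0),
    "B4": lambda frame: ring_mod(frame, 110.0),
}

def process_action(frame, detected_note):
    # Unrouted notes fall back to a dry pass-through.
    return routes.get(detected_note, lambda f: f)(frame)

frame = np.sin(2 * np.pi * 330.0 * np.arange(480) / 48000)
wet = process_action(frame, "E4")     # transformed
dry = process_action(frame, "C3")     # untouched
```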

The string vibration generated by the finger Action is considered a Virtual Sound Object in the digital domain, and every Virtual Sound Object can be manipulated independently. One of the manipulations that is important in the future development of DIGITL is the 3-D Spatial placement of the Virtual Sound Object.

The image shows a recent version of the software SPAT from IRCAM, the Institute also involved in the development of MAX by Miller Puckette in 1985. Spat~ is a project developed in 1995 by Jean-Marc Jot at IRCAM as an object inside MAX: a Spatial Processor for Musicians and Sound Engineers. The idea is that we can manipulate the audio signal in order to create the illusion that the sonic wave has a particular origin or trajectory in 3-D Space.

SPAT.jpg
Impulse Response in Spat R0 R1 R2 R3.jpg

Spatial location of sounds by humans is based on our ability to identify small differences between the two ears. The reflections of the sound (the Reverb again) make it possible to identify the provenance of the direct sound.
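
The two classic inter-ear cues can be sketched directly: the far ear hears the wave slightly later (interaural time difference) and slightly quieter (interaural level difference). The constants below are rough textbook figures for illustration, not SPAT's actual rendering model:

```python
import numpy as np

def place_source(mono, azimuth_deg, sample_rate=48000):
    # Crude binaural placement from interaural time and level differences.
    az = np.radians(azimuth_deg)
    delay = int(round(abs(0.0007 * np.sin(az)) * sample_rate))  # up to ~0.7 ms
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)            # up to ~6 dB
    near = mono
    far = np.concatenate([np.zeros(delay), mono * far_gain])[:len(mono)]
    if azimuth_deg >= 0:              # source on the right: right ear is near
        return np.stack([far, near])  # rows are [left, right]
    return np.stack([near, far])

tone = np.sin(2 * np.pi * 330.0 * np.arange(4800) / 48000)
stereo = place_source(tone, 45.0)     # tone placed 45 degrees to the right
```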

Here we can see that this kind of multiple and parallel processing also requires a huge amount of computation.

Core Spat Rendering Algorithm 1 Source.jpg
Spat Source Processing.jpg