Prism Roadmap & Feature Requests

Input Modulation

As of the 0.9.4 beta, audio input modulation can only be used to modulate the light output within Prism. There are plenty of options for modulating audio within DAWs, but we recognize the desire to modulate full audio tracks with a unified set of controls directly from Prism.

This has been on our internal wish list as well, and we’ve already been experimenting with evolving this concept within Prism, with compelling results.

We plan to implement this either in an additional beta release or in the imminent commercial release.

Input Modulation will migrate from the Light editor to its own section/tab within Prism, and will have an expanded set of options, including its own waveform modulator with waveform select, modulation depth, stereo phase, gain, and the ability to modulate each separate color with varying amounts, if desired.

As illustrated in the image above, the Stereo Phase parameter of all modulation waveforms in Prism will now visualize the phase difference between the left and right channels using a waveform line rendering (here, a 90-degree phase difference).
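As a rough sketch of the idea (all names here are illustrative, not Prism’s actual implementation), a stereo phase parameter amounts to a constant phase offset added to one channel’s modulation waveform:

```python
import math

def lfo_stereo(freq_hz, phase_offset_deg, sample_rate=48000, n=8):
    """Generate left/right sine LFO samples with a stereo phase offset.
    Hypothetical sketch; Prism's internals are not public."""
    offset = math.radians(phase_offset_deg)
    left, right = [], []
    for i in range(n):
        t = 2 * math.pi * freq_hz * i / sample_rate
        left.append(math.sin(t))
        right.append(math.sin(t + offset))
    return left, right

# With a 90-degree offset, the right channel leads the left by a
# quarter cycle: at sample 0 the left channel is at 0, the right at 1.
left, right = lfo_stereo(10.0, 90.0)
```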

FX Version

In addition to a more robust input modulation section, we are planning to release an FX version of the plug-in. This would differ from the virtual instrument version of Prism in two ways:

  • The FX version will acquire its input modulation source directly from the audio track it is placed on in the host DAW, eliminating the need for sidechain routing.
  • The FX version will lack signal source routing to submix outputs and will only use the default main stereo output (a limitation of effect plug-ins vs. instruments).

You will be able to use Prism solely as an audio rate LFO effect on any audio track in your DAW through a configuration in Prism Settings.
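Conceptually, an “audio rate LFO effect” amounts to amplitude-modulating the host track’s audio with the plug-in’s oscillator. A minimal sketch, assuming a simple sine LFO and a hypothetical depth parameter (not Prism’s actual DSP):

```python
import math

def audio_rate_lfo_fx(samples, lfo_freq_hz, depth, sample_rate=48000):
    """Amplitude-modulate an input buffer with a sine LFO.
    Conceptual sketch only; parameter names are illustrative."""
    out = []
    for i, x in enumerate(samples):
        # Unipolar LFO in [0, 1]
        lfo = 0.5 * (1.0 + math.sin(2 * math.pi * lfo_freq_hz * i / sample_rate))
        gain = 1.0 - depth + depth * lfo  # depth = 0 -> unity gain (bypass)
        out.append(x * gain)
    return out
```

At audio-rate LFO frequencies this becomes ring-mod-like tremolo; at low frequencies it is ordinary tremolo.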

Standalone Version

We do already have a very simple standalone version of Prism. It’s great for getting up and running quickly with minimal headache, and for live experimentation. However, it currently lacks sequencing capabilities (timeline and parameter automation, etc.).

If there is clear demand, we plan to release a professional Prism Studio version that includes a full-featured standalone app with full sequencing functionality, render to disk, and asset packs for use in creating a variety of different experiences. Prism Studio would replace the DAW, eliminating reliance on any third-party software.


MIDI

We are re-imagining how Prism will work with MIDI input, and we look forward to sharing our progress with everyone in future updates.

We’ve had some interest in the ability to drive frequency relationships from incoming MIDI note frequencies, and we’ve already begun to explore this concept by controlling Prism as an entrainment instrument.

Feature Requests

Please reply to this post with any additional feature requests and let us know how you are using Prism.


I just discovered this great tool, Prism.

As a music producer, MIDI support would be really nice for designing the light output more individually.

  • each color or channel could have its own MIDI note
  • velocity could control the color channel’s gain

I am using AudioStrobe. So, for example, C4 could be the left channel and D4 the right channel.
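In pseudocode, what I imagine would be something like this (names and mapping are just illustrative, nothing Prism actually offers):

```python
# Hypothetical mapping, as proposed: a MIDI note selects the light
# channel, and velocity (0-127) sets that channel's gain.
NOTE_TO_CHANNEL = {60: "left", 62: "right"}  # C4 = note 60, D4 = note 62

def handle_note_on(note, velocity, channel_gains):
    """Update channel gains from a single note-on event."""
    channel = NOTE_TO_CHANNEL.get(note)
    if channel is not None:
        channel_gains[channel] = velocity / 127.0
    return channel_gains

gains = handle_note_on(60, 127, {"left": 0.0, "right": 0.0})
# gains["left"] is now 1.0, gains["right"] is unchanged
```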

Is something like this on the roadmap?

Hi ivo, and welcome!

We actually started with a MIDI workflow, with each MIDI note storing a separate “state” value for all the Prism parameters combined…so you could dial in the final result for each MIDI note, and the plug-in would animate between those parameter values as you played each note (either through live input or via a MIDI composition).

However, this ended up being confusing for the standard workflow of the brainwave entrainment community, who wanted to focus on animating Automation Parameters / Envelopes. Additionally, the MIDI workflow proved problematic when scrubbing the timeline, as it was essentially non-deterministic. MIDI notes only know when they’ve been triggered on and off, so it wasn’t possible to accurately represent in-between states of notes while scrubbing the timeline, if that makes sense?
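To sketch the difference (hypothetical code, not Prism’s implementation): an automation curve is a pure function of time, so any scrub position can be evaluated directly, whereas MIDI state can only be reconstructed by replaying the event history up to that point:

```python
def automation_value(points, t):
    """Automation is deterministic: the value at time t is a pure
    function of t, via linear interpolation between breakpoints."""
    points = sorted(points)
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    # Clamp outside the breakpoint range
    return points[-1][1] if t > points[-1][0] else points[0][1]

# A MIDI note stream, by contrast, only carries on/off events;
# recovering the parameter state at an arbitrary t would require
# replaying every event before t, which hosts don't do on a scrub.
value = automation_value([(0.0, 0.0), (4.0, 1.0)], 2.0)  # -> 0.5
```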

We’d like to bring back a similar workflow in the future, but likely through a standalone version that gives us more control to offer these features outside of the DAW.

In the meantime, we’ve experimented with using MIDI to drive frequency relationships between Prism and the MIDI note’s associated frequency. This is all a bit difficult to go into in detail here, but we’ll create a video preview to share with the community when we’re ready to experiment with MIDI integration again.

OK, thanks.
I am sure these ideas are great for the community.
But for my purposes it’s a pity, since I design AVE sessions together with the music.
MIDI notes would also be perfect for the sound & vibration in my case.

By the way, how do I control the left and right light channels for AudioStrobe?
Can I control them separately in one Prism plug-in, or do I need to use two Prisms, one as left and one as right?

The audio input meter doesn’t show the input signal in Logic Pro, although it seems to work. I used the sidechain input of my audio file in Logic.

The signal flow also seems to work without being shown on the audio meter, even when you mute the audio input channel.

Hi! Not sure what the future or current state of this (lovely) plugin is! I have three thoughts:

– it would be very convenient to have some sort of RGB picker option that would allow users to find colors without having to manually balance the three RGB channels via percentage

– as was mentioned elsewhere in the forum, it would be nice if the ‘pitch’ parameter within the sound tab wasn’t capped to the 100 to 500 Hz range

– it would be nice (though this might be more difficult to implement) if there was a deeper ability to adjust the left and right eyes independently of each other, beyond stereo phase…

Regardless, thank you so much for creating an efficient way to develop custom AVS sessions!

Thanks for your interest and for taking the time to share your feedback on the plug-in; we really appreciate it.

This is something we’ve actually thought about and prototyped already. A color picker works reasonably well when none of the parameters that control color are automated within the project. However, we don’t anticipate that many users will pick a single color for the entirety of their composition; rather, they will automate changes to the color parameters over time.

The issue we ran into with the concept is that when the color parameters are automated within the host, they immediately override any color picker value (which is confusing), and audio plug-ins are not allowed to insert automation parameter values into the host, which is the only way we could make the color picker values stick. The best we could do is create a color picker that simply converts the color into 0-100% red, green, and blue values as a reference, which you would then apply manually. It would be a pretty simple “nicety” at this point, but let us know if this is still something you would value.
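For reference, the conversion itself is trivial; a sketch of what such a reference-only picker would output (hypothetical helper, not part of Prism):

```python
def hex_to_prism_percentages(hex_color):
    """Convert a hex color to 0-100% red/green/blue values, matching
    the percentage-based color sliders described above. Reference-only:
    the values would still be applied to the sliders by hand."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return tuple(round(100 * c / 255, 1) for c in (r, g, b))

# A mid-orange, for example:
hex_to_prism_percentages("#FF8000")  # -> (100.0, 50.2, 0.0)
```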

We’ve thought about this a bit, and we would like to open up the range a bit in future versions. There are however a few issues we still need to balance to make this possible.

First, we feel that as the pitch of the tones increases, they generally become less pleasant, and we are unsure how many users would actually use the higher frequency range for audio tones. Frequencies below 100 Hz can be generated with the Vibration tab.

Second, expanding the range means that less of the parameter/slider range would be dedicated to the (more often used) lower frequencies. This lowers the effective editing resolution for lower frequencies when using the sliders or automation parameters with a mouse. We want to be confident that the majority of users would utilize the expanded range before we modify it.

Third, research by Gerald Oster on binaural beats and their detection in the brain indicated that 440 Hz is the ideal carrier tone frequency, that the ability to detect binaural beats diminishes as frequency increases, and that detection wanes completely in most people above 1000 Hz. We are unsure how this affects isochronic tones.

Fourth, in order to increase the frequency range of the oscillators in Prism beyond their current range, we would have to increase the complexity of our bandlimiting technique to prevent aliasing that would result in inaccurate harmonics, which would interfere with the AudioStrobe/SpectraStrobe tones. This type of aliasing will cause glitches in visual output.
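To illustrate the bandlimiting idea in general terms (a textbook additive-synthesis sketch, not Prism’s actual technique): summing only the harmonics that fall below Nyquist guarantees that no partial aliases back down into the band where the AudioStrobe/SpectraStrobe control tones live:

```python
import math

def bandlimited_saw(freq_hz, n_samples, sample_rate=48000):
    """Additive sawtooth: sum sine harmonics only up to Nyquist.
    Generic textbook approach, shown for illustration only."""
    nyquist = sample_rate / 2
    max_harmonic = int(nyquist // freq_hz)  # highest non-aliasing partial
    out = []
    for i in range(n_samples):
        t = i / sample_rate
        # Fourier series of a sawtooth, truncated at Nyquist
        s = sum(math.sin(2 * math.pi * k * freq_hz * t) / k
                for k in range(1, max_harmonic + 1))
        out.append((2 / math.pi) * s)
    return out
```

Raising the oscillator’s frequency range makes this harder in practice, which is the trade-off described above.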

We’re definitely not opposed to the range increase, but these are the issues we need to solve first. As a workaround, you could try generating (bandlimited) tones outside of Prism and routing them through Prism’s Aux. Input for modulation (using them as external isochronic tones).

Do you mean you would like the full set of Prism controls, except separated per eye, as opposed to shared between the eyes (discounting stereo phase)?

There is a workaround here, although it is a little clunky. You can achieve this using 3 separate instances of Prism for SpectraStrobe, or 2 instances for AudioStrobe, with each instance placed on its own audio track within the DAW. For SpectraStrobe, use one instance to generate the SpectraStrobe reference tone; on the other two instances, disable the reference tone in Prism settings (so it doesn’t double up), then pan one instance’s track hard left and the other’s hard right. For AudioStrobe, follow the same approach, except you don’t need the reference-tone instance, as AudioStrobe does not require a reference tone.

In terms of the state of the plug-in: we’ve been busy behind the scenes, and have some exciting updates to hopefully reveal soon, which will include an updated Prism as well as a few other announcements.