Friday, February 10, 2012

Fizmo Project progress report

About two years ago, I launched something called the Fizmo Project, an attempt to decode the Ensoniq Fizmo's patch dump format and identify where all of the parameters are. I've posted on it here before, but after working on it for a while, I gave up in frustration: identifying individual parameters in the 3097-byte patch dump proved to be quite a challenge. And that wasn't the only problem; none of the changes that I could see in the patch dumps seemed to correlate with the actual values of the parameters in any way that made sense.

Well, last summer I finally devised a way to partially automate the process, using the Unix "tr" and "diff" tools in Mac OS X. With this, I can now compare two dumps, and it will find all the differences and list them for me. And the mystery started unraveling. Some info: The patch memory structure on the Fizmo is a bit unorthodox. It stores 64 "sounds", each of which consists of two layers. Each layer is basically one complete signal path, with a Transwave oscillator, a filter, a VCA, three envelope generators, an LFO, and some other modifier sources. Within a sound, the two layers can be overlaid or split across the keyboard. Each sound has its own name.
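For anyone who wants to replicate the comparison without the tr/diff gymnastics, the same idea is only a few lines of Python. This is just a sketch of the workflow; the file names are hypothetical.

```python
# List every byte offset where two sysex dumps differ. Offsets are
# 1-based to match the convention used in the layout spreadsheet
# (the F0 beginning-of-exclusive byte is byte 1).

def diff_dumps(dump_a: bytes, dump_b: bytes):
    """Return (offset, byte_a, byte_b) for every differing byte."""
    return [(i + 1, a, b)
            for i, (a, b) in enumerate(zip(dump_a, dump_b))
            if a != b]

# Typical use: tweak one parameter on the synth, capture a dump before
# and after, then see which bytes moved. (Hypothetical file names.)
# before = open("patch_before.syx", "rb").read()
# after = open("patch_after.syx", "rb").read()
# for offset, a, b in diff_dumps(before, after):
#     print(f"byte {offset}: {a:02X} -> {b:02X}")
```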

The Fizmo also stores 64 patches, each of which consists of a combination of four of the 64 sounds, plus a few other parameters. A sound can appear in more than one patch. Each patch has its own name, in addition to the names of the four sounds selected in the patch. A patch also contains a global effect; it can choose one of about 20 available effects, and each effect has its own set of parameters.

When you do a patch dump, you get a big ugly lump of 3097 bytes. There's a header area, followed by four blocks of 640 bytes, one for each sound loaded into the patch. All four sound blocks have the same layout. The parameter layout is really, really ugly. Don't listen to anybody who tells you that it is similar to the layout used by the Ensoniq MR and ZR models. Not even close. Very few parameters start or end on a byte boundary; most span across bytes, and most bytes that contain any data at all contain pieces of two parameters. There seem to be unused bits and bytes scattered about everywhere at random. Some parameters which take up seven bits go from 0-127, like you'd expect, but others only range from 0-100. The character strings which contain the patch and sound names appear to be stored in reverse order, last character first. There's a lot of dead space. I haven't yet made any effort to identify where the effects parameters are, so it's possible that a lot of the apparent dead air is taken up by them, but we'll see.
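To make the bit-packing concrete, here's a Python illustration of the two decoding quirks: a parameter split across two MIDI data bytes, and a name stored last-character-first. The offsets and packing shown are made up for illustration; per the notes above, the real layout varies from parameter to parameter.

```python
def read_split_param(data: bytes, byte_idx: int, lo_bits: int) -> int:
    """Read a 7-bit value whose low `lo_bits` bits sit at the top of
    data[byte_idx] and whose remaining bits sit at the bottom of
    data[byte_idx + 1]. (Hypothetical packing, for illustration.)"""
    hi_bits = 7 - lo_bits
    low = data[byte_idx] >> (7 - lo_bits)             # top of first byte
    high = data[byte_idx + 1] & ((1 << hi_bits) - 1)  # bottom of next byte
    return (high << lo_bits) | low

def read_name(data: bytes, start: int, length: int) -> str:
    """Names appear to be stored in reverse order; undo that."""
    return bytes(reversed(data[start:start + length])).decode("ascii")
```

For example, a name field containing the bytes for "omziF" decodes to "Fizmo".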

Following is a screen shot of a file that I'm building to describe the dump layout in detail:



This file is available on my Web site here. It's an Open Office spreadsheet; Open Office is supported on both Windows and Mac OS X, and you can download it for free from openoffice.org. The row number is the number of the byte in the dump, where the sysex beginning-of-exclusive byte (F0) is byte 1. Column A lists which of the four sounds the parameter belongs to, if it is specific to a sound. Column B shows which parameter or parameters appear in the byte, and in which bits. The rightmost bit is bit 0, and the leftmost bit is bit 6 (remember, only 7 bits are available in a MIDI data byte). Column C provides any additional notes, such as parameter range or bit encoding. The next seven columns, with the color-coded blocks, will match up (in most cases) with the colors used in the text, to show which bits in the byte each parameter is using. Looking at the screen grab above, you can see how much empty space there is, and how many parameters wrap across byte boundaries.

For those who didn't know, the Fizmo has a huge number of parameters which are not editable (or even viewable) from its own panel; you have to use an external editing device to see or change them. Given that there are about 200 parameters for each sound, the only practical way to do this is to use a computer editor. However, the only editor that has ever been available for the Fizmo is a bastardized version of Sounddiver that Ensoniq made available to Fizmo buyers, before Creative Labs (the company that bought out Ensoniq) shut them down. My experience has been that this version of Sounddiver is incredibly buggy; it never quite does the same thing twice, and nearly every time I run it, I have to try to figure out what set of magic incantations is required that day in order to get it to basically work. The librarian features are untrustworthy and I've trashed many of the patches on my Fizmo in the process of doing this investigation (fortunately, I didn't have anything on it that I particularly wanted to keep).

Given all this, there is strong motivation to work towards creating an open-source editor. That's one reason I'm doing this project. The Fizmo has the potential to be a very powerful synth, but most of its power can't be unlocked until there is a reliable and supported patch editor for it. I'm maintaining a thread concerning the project on VSE; I'd like to get some people interested in starting to write some code.

The main things remaining to do with the patch dump are: (1) figure out how the values work for the parameters that specify wave selection and offsets into the wave table (they appear to be memory addresses, but I haven't poked at them much yet), and (2) start working out the locations of the effects parameters. After that, although it isn't an absolute necessity for a basic patch editor, I'd like to start documenting the single-parameter sysex messages that Sounddiver uses to transmit individual parameter edits to the synth.

Sunday, January 8, 2012

Realizing the Realizer

This post was inspired by Big City Music's announcement this week that they are offering a PPG Realizer for sale. The Realizer, as many synth enthusiasts know, was the legendary attempt by Wolfgang Palm's PPG company to pioneer the soft synth concept using 1980s technology. However, the development costs became the final straw for the financially struggling PPG, and led to the company liquidating itself in 1988. The Realizer never went into production; rumor has it that two prototypes were built. (If this is true, I don't know where the other one is; I've checked the listings of the Audities Foundation, the New England Synthesizer Museum, and the Eboard Museum. None of them list a Realizer, but that doesn't mean they don't have one in their collection somewhere.)


Realizer control desk -- from ThePPGs.com

So what was the Realizer actually? Modern legend has it that it was the first virtual analog synthesizer. Actually, however, it was both more than that, and possibly less than that. According to Palm, the Realizer was an early attempt to build what we now call a "workstation"; it would have been capable of synthesis, multi-track recording, processing, and mixing. The unit you usually see in pictures, which PPG called the "control desk", is not the whole system; it's only the control unit and user interface. The control desk could interface with a combination of sound modules that actually did the audio processing, and hard disk units (HDUs) which provided audio and data storage, up to 8 units total. The sound modules and HDUs were intended to be somewhat mix-and-match with other PPG products, such as the Waveterm B.



The sound module was the piece of the Realizer that did most of the work. Like many of today's digital synths, it contained a bank of digital signal processor ICs -- in this case, eight of the TMS 32010 -- and a Motorola 68010 that managed everything. (The HDUs also contained a pair of 32010s, but it appears that they were not used in the Realizer configuration.) PPG developed three synthesis packages that ran on the sound modules. The one that everyone talks about in regard to the Realizer was the "Minimoog" virtual analog software that reproduced not only the features of the Mini, but also its panel layout. However, there was also application software for additive synthesis and for the sampling and wave scanning method of synthesis that PPG was known for. Further, according to Palm, the software allowed individual processing functions to be mixed and combined in the style of a modular synth. (You could say that PPG invented Reaktor in 1986!)


PPG HDU with stand-alone control unit (in foreground). From Synthmuseum.com

Further, the Realizer also incorporated the functions of what we now call a digital audio workstation. It was capable of multi-track recording and editing using the HDUs as storage. It had "plug-ins" for adding common studio effects (although, according to Palm, the reverb software was troublesome and was never completed to a satisfactory degree). And it could do mixdowns, producing a digital master that could presumably be transferred directly to CD mastering systems, although it is not clear how that would have actually worked.

The control desk is the part of the Realizer that everyone has seen. It contains a monitor (which was to have been color in the production version, but the prototypes had green-phosphor monochrome), 31 knobs, 6 faders, a data entry wheel, a keypad with numeric and function keys, and a graphics tablet. Each of the knobs and faders had line graphics drawn on the panel from the control to the edge of the screen opening; the software drew matching lines from the screen edge to the on-screen parameters, so the user got a visual indication of which knob or fader controlled what on a given screen. The graphics pen would have been used for drawing, and probably also for selecting commands from menus, in the fashion of the Waveterm. The best photo I've been able to find that shows a screen and illustrates the layout and the knob graphics is the following, taken from Palm's old blog on Myspace. Palm describes this photo as being of an early mock-up, which accounts for the crude-looking panel.




So what made the Realizer so expensive to develop, enough that it killed the company? One thing that Palm mentions is that in the timeframe when the Realizer (and presumably the Waveterm B also) was being designed, PPG had decided that laying out the circuit boards by hand, which was how they had done all of their previous products, wasn't going to work given the density of the boards they wanted to design. So they purchased an electrical CAD software package (which would have been very expensive in 1986), and then leased a computer system to run it. Unfortunately, the leased computer apparently didn't have sufficient performance, and the system ran very slowly, harming productivity. There probably would also have been a learning curve for the engineers. Palm also mentions, in his account of the Waveterm B development, that the Waveterm B and Realizer were PPG's first products to use 16-bit sampling, and that they had a hard time getting their 16-bit analog-to-digital converters working properly. (In order to build the initial sample library to be shipped with the Waveterm B, they hacked a Sony F1 digital tape system and used its converters.)

However, I'm guessing that the real killer was the software. Even though there was probably some commonality with PPG's other products, there would have been a ton of new software to be written. They had to write a lot of new software for the control desk displays and user interfaces; the Moog emulation and the additive synthesis were new, and they had implemented a lot of improvements to the sampling which required new software. (Plus, knowing how things were done back in the day, the 12-bit sample handling software used in the Wave keyboards probably did a lot of "tricks" with unused data bits, which would have had to be rewritten to handle 16-bit samples.) The sound modules required a bunch of new software to manage all of the DSPs, not to mention the actual DSP processing code. Also: they were writing all of this in assembly language. Palm states that a C compiler was not available for their systems at the time -- a statement which puzzles me, since C compilers were readily available for most processor families by 1986, and Sun, to name one, had one for the 68000-series CPUs.

The final factor was that, due to a combination of circumstances, PPG found themselves without a lot of money to spend on R&D. In 1986 they had invested money in moving production to a larger facility, in part due to robust sales of the Wave, but at about the time the new factory opened, Wave sales began to decline. The Wave, especially by the time of the 2.3 revision, had very sophisticated capabilities for wave scanning and manipulation of samples, but a lot of users didn't care about that -- they only wanted basic sampling and playback, or just playback of canned sample libraries, and so they gravitated towards less expensive samplers like the Emulator or the Ensoniq Mirage. The Wave was PPG's bread-and-butter product, so when sales declined, the company's revenues suffered. A few Waveterm B's were sold, and a few HDUs were sold as stand-alone products, but the Realizer never reached production before Palm and the other founders realized that they were going to run out of money. They liquidated rather than continue and be forced into bankruptcy.

Going back to the sale by Big City Music, I don't know exactly what it is that they are selling. The ad shows only the control desk. Although that item no doubt has significant collector value by itself, the point remains that if the goal of a buyer is to get the system actually running again, it won't do anything without at least one sound module and one HDU. Perhaps Big City has these items and is including them in the sale; the ad copy doesn't say.

Below is a link to a 1987 demo, from Palm's Myspace page. The Realizer's control desk can be seen on the left for much of the video. (The device that the demonstrator is holding appears to be a stand-alone control for an HDU, and not part of the Realizer configuration.) Note that the demonstrator never touches the Realizer control desk, which suggests that the software was still not stable at this point.

http://mediaservices.myspace.com/services/media/embed.aspx/m=,mr=60628615,t=1,mt=video

Finishing up with a historical curiosity: The astute observer may have noticed that in the photo of the control desk, following the first paragraph of this post, the desk does not have the data entry wheel. I don't know if this implies that prototypes were built both with and without the data wheel, or if it was added to the pictured unit after the photo was taken.

Sunday, January 1, 2012

Thoughts on panel graphics

Happy New Year to synthesists everywhere! Looking back at the last year, I see that for a while I've been focusing almost exclusively on gear. That was not really my intent. I'll let you in on a little secret: One of the reasons I created this blog was to use it, in a sense, as my own notebook; a lot of the things that I post are things that I want to be able to refer back to myself. For instance, since I did the detailed description of the MOTM-650 MIDI-to-CV interface back in 2010, I've referred back to it several times when I needed to sort out some parameter or other.

However, I'm not doing this just for myself. If I was, I'd just keep a notes file on my computer, and not bother with a blog. My New Year's resolution, in regard to Sequence 15, is to share thoughts of all sorts with regard to synthesis and electronic music. Just blogging about gear is too limiting, and accounts for the dearth of posts.

So here goes... In the ongoing discussion about the differences in the user community between modular synth performers who prefer the Eurorack format, and the performers who prefer the "5U" formats (MOTM, Dotcom, and Modcan-A), one thing that's often debated is the style of panel graphics commonly seen on Eurorack modules, versus the usual style of 5U modules. Let's compare: Here is a (rather murky) photo of my Encore Universal Event Generator:





By comparison, here, from Analogue Haven's Web site, is a photo of the Makenoise Maths:



The Encore UEG clearly maintains the tradition of the vintage Moog modulars: white graphics on a flat black background, and controls in neat rows and columns. (Further, it follows the MOTM format convention of putting the I/O jacks at the bottom, although it doesn't really conform to the whole MOTM standard grid due to the large number of knobs.) All the controls and jacks are labeled in a clean font, and the panel has index marks for the knobs. Line graphics are used to indicate associations between controls. Most MOTM, Dotcom, and Modcan-A format modules follow this pattern; in the world of 5U, Modcan's B-series modules (which are MOTM format) are considered a bit radical for having black graphics on a white background. There have been a few other makers of large-format panels who have used colored text and line graphics, but even they tend to stick to the black background and standard fonts.

Now let's compare with the Maths. White background with a red border around the edge of the panel. (And that's considered conservative in Euro-land.) Zig-zaggy graphics that show the flow of signal through the module. There are four input jacks; they are at the top of the module, and you have to read the manual to realize that they are the four jacks pointed at by the small arrows. Knobs and jacks scattered hither and yon, although the panel is symmetrical. (It has two processing channels; the two outside ones do basically the same thing, and the same goes for the two inside ones.) Functions of some of the jacks are indicated only by the signal flow graphics. You have to look rather closely to see the little math operator symbols that label some of the controls. The knobs don't have any indexing, and there are two illuminated pushbutton switches whose purpose is not indicated at all. And I don't know where the hell Makenoise came up with that font; maybe they made it themselves.

If that sounds like I'm ragging on Makenoise, I'm not intending to be. If you go to Makenoise's Web site and look through the descriptions of their modules, you realize that Makenoise has its way of doing things, and once you've studied it and gotten into that groove, most of those panel markings make intuitive sense. Where you start to run into problems in Euro-land is when you realize that the Makenoise way of doing things is not the same as the Harvestman way of doing things, which is not the same as the WMD way of doing things, etc.

Euro users put up with this, in part because it looks cool. But I think there is also more of an aesthetic in the Euro world of being willing to patch something up, turn some knobs, and see what happens, where in the 5U world, users tend to want things to be more precise (or "anal" if you prefer). This is just a general statement based on anecdotal data; it certainly doesn't apply universally. However, I do note that there are a few small makers in the Euro world who are willing to silkscreen something on a panel that has nothing to do with the module's function, or just leave a panel blank; almost no one in the 5U world would ever do that. Even in Euro-land there has been a bit of a reaction to some of the more excessive panel designs; Pittsburgh Modular makes a wry comment on it with their 1960s-embossed-label aesthetic.

Other aspects of small vs. large format have been discussed to death already: 5U takes up a lot more space; Euro/Frac knobs are too small for large fingers, 5U panels cost more to make, 3.5mm jacks break off too easily, etc. However, I think there's one other, very practical concern. It's been noted that 5U users tend to be, on average, older than Euro users. Here's the other reason us 5U guys like things nice and clean: when we look at something like the Maths panel above, we can't see the panel! Our eyes aren't as good as they used to be. If we had a Maths, we'd have to get a magnifying glass out every time we wanted to use it. Panels like that give us headaches. We have to stick with nice high-contrast panels with clean labeling that we can see.

And besides, we like the laboratory-test-equipment aesthetic. Our moms all say it reminds them of their father's ham radio gear.

Wednesday, December 28, 2011

Washington

A new Statescape... named Washington, and done entirely with the Solaris. More at the Web page.

Monday, November 14, 2011

Solaris demo, and a few comments/corrections from John Bowen

First of all, John Bowen has sent along some corrections and additional notes to my overview posts. (I warned everyone that it was a quick-n-dirty...) John's notes:

The mixers allow each input to be modulated separately, plus the output can be modulated. So each mixer allows five different modulation inputs. I should have realized that; I guess I assumed that "ModSrc1", "ModSrc2", etc., referred to the four mixers rather than the four inputs of the selected mixer.

The S/H waveform of the multimode oscillator is tunable and can track the keyboard; it isn't just low-pass-filtered noise.

Samples cannot be loaded via the USB interface in the current version of the OS. To get new samples into the system, what you have to do is get a CF flash card reader connected to your computer (they're cheap), pull the CF card from the Solaris, put it in your reader, and then move the sample files to the card. Then, you plug the card back into your synth. I'll write some more about samples and sample loading after I've had a chance to experiment with it.

I forgot to edit something about signal routing to the FX channels before I published, and as a result I got that part wrong. The four VCAs mix down into a fifth VCA, which you can't control directly, but VCA 6 is connected to it. When you select "synth" as the input to an FX channel, it is taking its input from the fifth VCA.

The overdrive part of the Minimoog filter algorithm was actually moved from the filter to the VCA section, which allows you to use overdrive with any filter. It's the VCA "boost" parameter (which I missed when I read that in the manual).

I mentioned some anomalous behavior in the velocity and aftertouch, which at the time I was writing that post, I had confirmed with MIDI Monitor. Well, guess what: the next day, it was behaving normally with the same patches. So I'm not sure how I did that. Possibly I messed up the sensitivity settings when I was playing with the system parameters (which, fortunately, I didn't save).

The envelope follower can be used with any signal source, not just the external inputs. I knew that, but the way I wrote it may have given the wrong impression.

I should have mentioned that the default routing of the mod wheel to the LFO 5 amount to pitch can be disabled. Then, LFO 5 is output at a constant amount set on the main page 2. You can always route mod wheel anywhere you want, just like any other modulation source.

I made a comment about the joystick on previous vector synths: John says that the Prophet VS and Korg Wavestation (neither of which I've ever had my hands on) were not capable of memorizing joystick movements. So I guess that's a characteristic that was limited, among vector synths, to the Yamaha SY77/TG33 (which John says he didn't work on). I have a TG33 and I know it does that.

Finally, John wrote me some good stuff about his history with Sequential Circuits and Creamware. He wants it to be known that he was not the author of Scope -- it was already written when he went to work for Creamware. John gave me some great info, and with his permission, I'll summarize it in a future post.

Now, finally, the Solaris demo. Note that this was also kind of a quick-n-dirty; there is some parameter tweaking, but it's all based on factory patches. There are four parts: in part 1, I demonstrate a patch that uses a rotor, and demonstrate some of the effects you can get by varying the rotor frequency. There's a quick demo of using the main screen and patch list to select patches, and then part 2 which demonstrates the different filter types. Part 3 is a quick demo of a patch that uses the arpeggiator, and part 4 demonstrates the ribbon controller.





Monday, October 31, 2011

Solaris Architecture, part 2: Control Sources

In Part 1, we looked at the audio signal generators, processors, and routing. In this part, we'll look at the control signal sources. To recap, these are:
  • Six envelope generators
  • One looping envelope generator
  • Four lag processors
  • Arpeggiator
  • Step sequencer
  • One envelope follower
  • Velocity and aftertouch
  • Performance devices: Joystick, ribbon controller, assignable buttons, and assignable knobs
  • Expression pedal input jack
  • MIDI continuous controllers
The six envelope generators are of the DADSR (delay-attack-decay-sustain-release) type. EG 6 is hardwired to the final VCA, although any of the EGs can be routed to other destinations. The minimum time for any segment is 0.1 milliseconds and the maximum is 20 seconds. A useful feature is the ability to slope the sustain segment, instead of having it be a constant level. The time of each segment can be modulated by velocity, key tracking, mod wheel, or one of four MIDI continuous controllers. The looping envelope is actually a universal event generator; in fact it is quite similar to the Encore Electronics UEG that I reviewed last year.
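As an illustration of the sloped-sustain idea, here's a rough Python sketch of a DADSR envelope. Linear segments and all parameter conventions here are my assumptions; this is not the Solaris's actual algorithm.

```python
def dadsr(t, delay, attack, decay, sustain, slope, release, gate_time):
    """Envelope level at time t for a note held for gate_time seconds.
    Times in seconds, levels 0..1."""
    def held_level(u):
        # Level u seconds after key-down, while the key is still held.
        if u < delay:
            return 0.0
        u -= delay
        if u < attack:
            return u / attack
        u -= attack
        if u < decay:
            return 1.0 + (sustain - 1.0) * (u / decay)
        # Sloped sustain: drift away from the sustain level over time,
        # instead of sitting at a constant value.
        return max(0.0, min(1.0, sustain + slope * (u - decay)))

    if t < gate_time:
        return held_level(t)
    # Release: fall linearly from whatever level we had at key-up.
    u = t - gate_time
    if u >= release:
        return 0.0
    return held_level(gate_time) * (1.0 - u / release)
```

With slope = 0 this behaves as an ordinary DADSR; a small negative slope makes the sustain sag slowly toward zero while the key is held.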

The lag processors do what you expect; you can route other signals through them to smooth transitions. (You don't have to use the lag processors to produce key portamento; each oscillator has its own glide settings.) The envelope follower produces a control signal proportional to a signal fed to it (presumably from an external input).

The Solaris is unusual in that it contains both an arpeggiator and a step sequencer. I have not played with either one very much yet. The arpeggiator can use its internal clock or sync to MIDI clock. It does the usual up, down, up/down, random, and as-played patterns. The sequencer is a four-row, 16-step sequencer with variable step lengths, and it can be routed to any destination -- it isn't tied to oscillator frequency. Both have the ability to use stored patterns that the user can create, but the software to edit the patterns is not finished yet.

The keyboard generates velocity, release velocity, and aftertouch. There are scaling and offset parameters which can be stored in each patch, which apparently are capable of making velocity and aftertouch do some rather strange things. According to the manual, the Solaris will receive and respond to polyphonic aftertouch, although it will not generate it. The keyboard has the usual pitch and modulation wheels to the left; the pitch wheel is spring loaded while the mod wheel is not. The pitch wheel does not appear to be routeable to any parameter other than oscillator and rotor pitch. The mod wheel is defaulted to control the amount of LFO 5 that is routed to oscillator and rotor pitch, but it can be routed to other destinations.

The Solaris has an array of performance controls besides the pitch and mod wheels, the most notable of which is the ribbon controller that runs the span of the keyboard. In my tests with it, I found the ribbon controller to be very smooth and glitch-free. It can be configured so that the point where you first touch it becomes the zero point, in the style of the much-vaunted ribbon controller on the CS80. It can also be configured to hold its last value.

There are buttons to turn the arpeggiator and sequencer on/off, and a "hold" button that does what a sustain pedal does, except that it is latching; once you press the button, you can take your hands off the keyboard and it will keep playing. There are two assignable buttons that can send a constant value to a modulation destination. One of the preset pages on the large window allows the bottom row of knobs underneath the window to be used as assignable knobs. And there is a jack for plugging an expression pedal.

In the MIDI Setup, there are five controller designations labeled CC1-CC5. You can assign any MIDI continuous controller number to these, and then they can be routed to any modulation destination. Finally, it appears that every patch parameter is accessible via the MIDI NRPN mechanism, although it is possible that not all will respond in real time -- it would take a long time to try every possible value.
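The NRPN mechanism itself is plain MIDI: CC 99/98 select the parameter number, and CC 6/38 carry a 14-bit value. Here's a quick Python sketch of the message framing; the parameter number and value in the test are made up, since the Solaris's actual NRPN map would have to come from its own documentation.

```python
def nrpn_messages(channel, param, value):
    """Return the four control-change messages (status, cc, data)
    that make up one NRPN parameter edit."""
    status = 0xB0 | (channel & 0x0F)  # control change on this channel
    return [
        (status, 99, (param >> 7) & 0x7F),  # NRPN MSB
        (status, 98, param & 0x7F),         # NRPN LSB
        (status, 6, (value >> 7) & 0x7F),   # Data Entry MSB
        (status, 38, value & 0x7F),         # Data Entry LSB
    ]
```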

This wraps up the quick overview of the Solaris architecture. It's quite possible that I will later find out that some of what I've written is wrong; I'll make corrections in future posts. I'm getting a lot of requests for a demo, so I'll get that up in a day or two.

Saturday, October 29, 2011

Solaris architecture, part 1: Signal sources and routing

Here's a quick look at the Solaris architecture. At this point, I'm still studying the manual and experimenting with the synth, so my understanding is incomplete and some of what I say here may be subject to correction later. So be aware of that. Nonetheless, here goes.

At first glance, the voice architecture of the Solaris appears to be a basic four-layer setup with conventional oscillator-filter-amplifier chains. However, the routing is far more flexible than that: components can be swapped back and forth between layers, components can appear in more than one layer at a time, and feedback loops between layers are possible. Strange as it may sound, the best way to understand the voice architecture is to start in the middle, with the mixers.

There are four mixers, each of which has four signal inputs and two control inputs. The mixers serve the purpose of combining up to four signal inputs, and also act as VCAs under the control of the two control inputs. On the input side of the mixers are all of the signal sources:
  • Four oscillators
  • One white noise source
  • One pink noise source
  • Two vector processors
  • Two rotors
  • Two AM processors
  • External inputs
  • Four mixer outputs
  • Four VCA outputs
  • Five LFOs
  • Six envelope generators
  • Four lag processors
  • Arpeggiator
  • Step sequencer
  • One envelope follower
  • Velocity and aftertouch
  • Performance devices: Joystick, ribbon controller, assignable buttons, and assignable knobs
  • Expression pedal input jack
  • MIDI continuous controllers
A few things to note here. The first is that mixers can process audio signals, control signals, or any combination. Mixer outputs can in fact be routed as control signals back to various places. The second is that feedback is possible: a mixer's output can be routed back to itself, or to other mixers.
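Here's a minimal Python sketch of the mixer-as-VCA idea, under my own assumptions: four signal inputs summed, scaled by two control inputs, with feedback modeled as a one-sample delay (the usual way a digital engine avoids a zero-delay loop; whether the Solaris does it this way is not documented in this post).

```python
class Mixer:
    """Four-input mixer that also acts as a VCA via two control inputs."""

    def __init__(self):
        self.last_out = 0.0  # previous output sample, usable for feedback

    def process(self, in1, in2, in3, in4, ctl1=1.0, ctl2=1.0):
        # Sum the signal inputs, then scale by both control inputs.
        self.last_out = (in1 + in2 + in3 + in4) * ctl1 * ctl2
        return self.last_out

# Feedback patch: route the mixer's own (delayed) output back to an input.
# m = Mixer()
# y = m.process(0.5, m.last_out * 0.9, 0.0, 0.0)
```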

On the output side of the mixers are these signal destinations:
  • Four filters
  • Four insert effects slots
  • Four VCAs
The possible routings on this side of the mixer are: An insert effect can be before or after a filter, and either the filter or the insert effect can be the input to the VCA.

Audio Signal Sources

Let's look at the input sources. The mixers can accept either audio or low frequency sources, but I'll do the audio sources first. The original source of all audio signals within the synth (other than external inputs) is the four oscillators. You can choose from six different implementations for each oscillator. Two of them are analog emulations, a Minimoog and a Curtis VCO (I presume this means the CEM 3340); they offer the same waveforms as the originals. The multimode "MM1" offers all of the standard synth waveforms, plus some continuously morphable waves a la the EML 101, proper white noise (computed, not a sample playback), a low-frequency rumbling noise referred to as "S/H", and a supersaw that appears in the display as "Jaws". There's a set of single-cycle waveforms taken from the Prophet VS, a set of wavetables (which can be scanned using the "shape" parameter) from the Waldorf Microwave, and a sample playback mode into which you can load your own samples via the synth's USB interface. An oscillator can accept up to four modulation sources (which can be low or audio frequency, and includes pretty much every signal source in the synth), and each modulation source can be routed to modulate frequency, shape (the effect of which depends on the mode and waveform selected), or linear FM.

The white and pink noise sources do what you expect. Note that they are both computed rather than sampled, which means that they sound the same no matter which note you play, and there is no clocking noise.

The vector processors emulate the vector synthesis method used on the Prophet VS, Korg Wavestation, and Yamaha SY22. The vector is basically a four-way mixer, with two sources at each end of an X axis and two more at the ends of a Y axis. (What they don't have is the ability to memorize a manual joystick movement and store it with the patch, which the synths named above do have. However, you could do this via an external sequencer.) By default, the vector inputs are tied to the panel joystick, but you can route any modulation parameter to either axis. The rotors are a variation on the vector synthesis idea: imagine a vector synthesis machine with a motor tied to the joystick, capable of making it move in a circle at audio rates. That's what the rotors do: crossfade between the four sources in a circular pattern. This amounts to a form of audio-rate wave scanning. The rotors track the keyboard (or not, if you switch it off), and otherwise act like oscillators.
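To make the rotor idea concrete, here's a little Python sketch of a circular four-way crossfade. The half-cosine weighting is my own guess at a plausible scheme, not anything from the Solaris documentation:

```python
import math

def rotor_mix(sources, phase):
    """Crossfade four source samples in a circular pattern.

    `sources` is a list of four sample values; `phase` is the rotor
    angle in radians. Each source sits at a compass point, and its
    gain peaks when the rotor points at it, fading to zero by the
    time the rotor is a quarter turn away (equal-power weighting)."""
    out = 0.0
    for i, sample in enumerate(sources):
        # Angular distance from the rotor to this source's position.
        offset = phase - i * (math.pi / 2.0)
        # Half-cosine window: full gain when aligned, zero when opposed.
        gain = max(0.0, math.cos(offset))
        out += gain * sample
    return out
```

At phase 0 you hear the first source alone; a quarter turn later, the second; sweep the phase at audio rate and you get the wave-scanning effect described above.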

The AM processors implement several possible amplitude modulation techniques, one of the choices being "standard" AM and another being ring modulation. I haven't played with this much and don't yet understand all of the algorithms or parameters. Any signal source can be selected as the carrier or the modulator.
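For reference, the two textbook variants are simple enough to write down in Python; whether the Solaris's algorithms match these exactly, I can't say:

```python
def ring_mod(carrier, modulator):
    """Ring modulation: plain multiplication. The output contains only
    the sum and difference frequencies; the carrier itself vanishes."""
    return carrier * modulator

def standard_am(carrier, modulator, depth=1.0):
    """"Standard" AM: the modulator varies the carrier's gain around
    unity, so the carrier remains audible alongside the sidebands."""
    return carrier * (1.0 + depth * modulator)
```

The audible difference is that ring mod suppresses the carrier entirely (the classic clangorous sound), while standard AM keeps it in the mix.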

There are six external inputs -- four analog input jacks, and the left and right channels of the SPDIF input. They can be routed to audio and control destinations the same way that internal signal sources can. The mixer outputs can also be routed back to the mixer inputs. The Solaris makes no attempt to prevent feedback loops from being created, and in fact feedback loops can be used in patches.

Filters and Effects

The mixers output to a chain that consists of "enable part" switches, insert effects, filters, VCAs, and effects channels. Signal routing on this side of the mixers is more limited; any signal can be routed to the input of a filter, but the insert FX can only accept input from the mixer or filter of the same number (e.g., insert FX 1 can only accept input from mixer 1 or filter 1); the VCA can only accept input from the insert FX or the filter of the same number, and an FX channel can only accept input from the VCA of the same number, or from another FX channel or external input. (It does appear that clever use of the external outputs and inputs could get around some of these limitations, but I haven't tried that yet.) The "enable part" buttons, when turned off, cut off the output of the corresponding mixer to whatever comes after it, but they do not cut off the mixer from places where it has been routed to a modulation input.

The filters and insert effects come first, and can be placed in either order by means of selecting their respective inputs. (They can also be fed back to the mixer inputs.) The filters, like the oscillators, have several selectable implementations. The "MM1" multimode filter is the most versatile; it is a 4-pole filter that allows a number of pole combinations of low pass, high pass, band pass, and band reject, in the style of the Oberheim Xpander. The "SSM" type emulates the 4-pole SSM low pass filter as implemented in the Rev 2 Prophet-5. The "Mini" type emulates the Moog 4-pole low pass transistor ladder filter, including its distortion and overdrive characteristics. "Obie" emulates the 2-pole filter as implemented in the Oberheim SEM and other early Oberheim models; it is switchable to low pass, high pass, band pass, or band reject modes. The "Comb" filter generates a comb-filter response like that produced by a flanger or cardboard-tube echo; there are two variants. The "Vocal" filter produces vowel formants, and can be varied between vowel sounds. The insert FX are all waveform modification effects. The "Decimator" reduces the sample rate of the signal; I haven't tried it yet. The "BitChop" is a bit crusher, and there's a soft clipping distortion.
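Incidentally, the comb response is easy to picture in code: mix a signal with a delayed copy of itself, and frequencies whose periods divide evenly into the delay time get reinforced while those halfway between get notched out. A bare-bones feedforward version in Python (my illustration, not the Solaris's implementation):

```python
def comb_filter(x, delay, gain):
    """Feedforward comb: y[n] = x[n] + gain * x[n - delay].

    Summing a signal with a delayed copy of itself creates evenly
    spaced peaks and notches across the spectrum, the characteristic
    "comb" shape a flanger sweeps around."""
    y = []
    for n, sample in enumerate(x):
        delayed = x[n - delay] if n >= delay else 0.0
        y.append(sample + gain * delayed)
    return y
```

Modulate the delay length over a few milliseconds and you get flanging; fix it at the resonance of a tube and you get the talking-through-a-cardboard-tube sound.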

The next bit, I'm a bit confused about. The four VCAs each accept signal input from the corresponding filter or insert FX. There are two modulation inputs, one for level and one for pan, or they can be cut off, which leaves the VCA wide open. The reason this doesn't result in infinite sustain is that, apparently, all of the VCAs sum down to a fifth VCA which is hard-wired to envelope generator 6, and that provides the master control over the output. The effects channels then accept input from the master VCA, or from another FX channel or external input. The reason I say I'm not sure about this is that it's not quite what the manual shows, but I think it's correct, and you'll see why in a moment.

Each FX channel has four FX slots, each of which can hold one effect. The four available effects are the chorus/flanger, phaser, delay, and EQ. (They are all stereo.) The effects are "pooled" such that a given effect can only appear in one slot, in one FX channel, in a patch. Contrary to the manual, these appear to be after the master VCA, and here's why I say that: I played with the delay parameters and found that the maximum delay time is a whopping 20 seconds. And... if you set a long delay and then play some notes, the delay will continue to sound until the echoes die out, long after the master VCA has shut off. Statescape time! (And in fact, I'm already thinking about doing that... have to build a clever patch for it...)
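That tail behavior, echoes ringing on after the input stops, is exactly what you'd expect from a feedback delay line sitting after the master VCA. A minimal Python sketch (my own illustration, not Solaris code):

```python
def feedback_delay(x, delay, feedback, tail):
    """Feedback delay line: echoes repeat every `delay` samples, each
    scaled by `feedback`, and keep ringing for `tail` extra samples
    after the input ends."""
    n_out = len(x) + tail
    y = [0.0] * n_out
    for n in range(n_out):
        dry = x[n] if n < len(x) else 0.0
        fb = y[n - delay] * feedback if n >= delay else 0.0
        y[n] = dry + fb
    return y
```

Feed it a single impulse and the echoes keep coming, scaled down by the feedback amount on each repeat, regardless of what the VCA upstream has done in the meantime.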

Finally, there are five pairs of stereo output channels, four analog pairs and one SPDIF pair. Each FX channel can be routed to one pair, or the "dry" output of the master VCA can be routed out. This, for example, would let you send a dry output to an external effect, and a chorused output directly to your mixer or DAW.

One last note: I have found what appear to be a few bugs (not unexpected, since the OS is version 1.0). I managed to crash the Solaris by twiddling the knobs under the large screen while I was on the second patch-store page (the one where you name the patch). Don't do that. Also, the keyboard velocity does not seem to work, and the aftertouch is odd -- it only outputs values of 0 or 63.

There's a lot more to the architecture; I haven't touched on the modulation sources yet. More in part 2.