Why is everyone so afraid of tantalum capacitors? Lately I see posts in various places from people who are anxious to go through all of their synths and rip out any tantalum caps that they find. Completely unnecessary, provided that the circuit the tantalum cap is in was designed by a competent engineer, and that the cap was installed properly. There are a few rules that you need to be aware of if you are going to use a tantalum cap in a synth (or any other kind of circuit), but if you do it right, tantalums have significant advantages over electrolytic types.
Solid tantalum capacitors. The loose capacitors at left are 10 uF; the caps fastened to the tape at right are 1 uF. Note the penny for size comparison.
First, let's talk a bit about how tantalums work. Tantalum capacitors have something in common with electrolytics: in both cases, the capacitor dielectric consists of the oxide of the metal that one of the electrodes is made of. In the case of the electrolytic capacitor, it's a layer of aluminum oxide that forms when the electrolyte reacts with the aluminum anode under an electric charge. The electrolyte is necessary to maintain the oxide layer. When it eventually evaporates or leaks out of the capacitor's container, the oxide layer breaks down and the capacitor loses capacitance.
Most tantalums used to be made the same way; these were called "wet slug" types. The ones that fail inside '70s ARP synths are wet slugs. The problem with these is that, because tantalum is pretty unreactive compared to aluminum, strong acids have to be used as electrolytes. These eventually eat up the seals around where the leads penetrate the body, and the electrolyte gradually leaks out. However, at some point, someone figured out that they could use a process where the tantalum "slug" is dipped in an electrolyte at the factory before the capacitor is assembled, and connected to a voltage to form the tantalum oxide dielectric layer. Then, the slug is removed, dried out, and assembled into a capacitor; the electrolyte is used only at the factory. Because tantalum oxide is more stable under electric charge, the electrolyte isn't necessary to maintain it. These are the "solid tantalum" or "dry slug" types. There are several different designs used in solid tantalums, but they all use the same basic idea -- a tantalum slug forms the anode; a layer of tantalum oxide built up on it is the dielectric, and some material (often manganese dioxide) is bound onto the outside of the oxide layer to form the cathode. Thus there is no electrolyte inside the assembled capacitor; since there is nothing to leak out, the capacitor will not degrade or lose value.
Cutaway drawing of a surface-mount solid tantalum capacitor. From an NEC data sheet.
As it turns out, tantalum oxide has a very high dielectric constant. This means that a tantalum capacitor can use a very thin layer of oxide, only a few microns. The basic rule of capacitors is that the capacitance value is directly proportional to the surface area and inversely proportional to how far apart the electrodes are; since the electrodes in a tantalum cap are separated only by the thin oxide layer, a tantalum cap can pack a lot of capacitance into a small volume.
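To make that proportionality concrete, here's a minimal Python sketch using the parallel-plate approximation. The relative permittivities are rough textbook figures, and the area and oxide thickness are made-up illustrative numbers, not measurements of any real part:

```python
# Parallel-plate approximation: C = epsilon_0 * epsilon_r * A / d
EPSILON_0 = 8.854e-12       # F/m, permittivity of free space
EPSILON_R_TA_OXIDE = 25     # approximate relative permittivity of tantalum oxide
EPSILON_R_AL_OXIDE = 9      # approximate relative permittivity of aluminum oxide

def capacitance(area_m2, thickness_m, epsilon_r):
    """Capacitance in farads for a given electrode area and dielectric thickness."""
    return EPSILON_0 * epsilon_r * area_m2 / thickness_m

# Hypothetical numbers: 10 cm^2 of effective (porous slug) surface, 1 micron of oxide
area = 1e-3      # m^2
thickness = 1e-6 # m

print(capacitance(area, thickness, EPSILON_R_TA_OXIDE))  # ~0.22 uF with tantalum oxide
print(capacitance(area, thickness, EPSILON_R_AL_OXIDE))  # ~0.08 uF with aluminum oxide, same geometry
```

Same geometry, roughly three times the capacitance, just from the higher dielectric constant of the oxide.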
The problem with having an exceedingly thin dielectric layer is that it can't withstand high voltages; it doesn't have enough insulation value. Thus one of the first rules for dealing with tantalum caps: do not expose them to excessive voltages. Most references I've seen recommend that tantalums be derated to 50% of the rated voltage. Some say to go down to 30% when the tantalum is used in a low-impedance, high-power circuit. Inexpensive tantalums are usually rated for 25-50V, with higher voltage ratings becoming harder to obtain above 10 uF.
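If you want that rule of thumb as a quick check, something like this sketch works; the 50% and 30% figures are just the ones mentioned above, not values from any particular data sheet:

```python
def max_working_voltage(rated_v, low_impedance_high_power=False):
    """Apply the usual tantalum derating rule of thumb."""
    derate = 0.3 if low_impedance_high_power else 0.5
    return rated_v * derate

print(max_working_voltage(35))        # 17.5 V max in a typical signal circuit
print(max_working_voltage(35, True))  # 10.5 V max in a low-impedance, high-power circuit
```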
The second thing about tantalums is something that they are notorious for: exploding. Why does this happen? Two words: reverse voltage. The polarity of a tantalum cap must be respected. Any reverse voltage breaks down the dielectric layer. It reduces the oxide back to metallic tantalum, which then forms a short-circuit path; if the circuit the capacitor is in can supply a lot of current, the short path heats up rapidly and the capacitor goes boom. The third point is related: ripple current. Large amounts of ripple have a similar effect: they create localized heat spots in the dielectric, which breaks down at those spots and a short circuit results. So it is generally best to avoid using tantalum caps in applications where they will be exposed to high ripple current. (Nonetheless, some power supply makers use tantalums as filter caps in their supplies, and it works. How they get this to work, I don't know.)
So why use tantalums? For one thing, they are small and light, much smaller than electrolytic caps of the same rating. Second, solid tantalums can't leak because they contain no electrolyte. "So what", you might say. In a synthesizer, does it matter if the capacitors are a few grams lighter? Probably not. No electrolyte is an advantage, but many synths are loaded with electrolytic caps and they seldom leak, and even if they do, it's usually not that big a deal.
Well, there's another, very good reason to prefer a tantalum capacitor over an electrolytic. Consider:
This is an interface circuit that I built for a home automation system. It's basically a 7555-based monostable timer circuit that, when activated, holds a relay open for a fixed amount of time. The little yellow blob at bottom center, just above the red connector with the blue and orange wires going to it, is a .68 uF tantalum capacitor. When I prototyped this circuit, I used an electrolytic capacitor of the same rated value. When I fired it up to check it out, the actual value of the cap as determined by the circuit's time constant worked out to about .35 uF -- unsatisfactory, because the timer needed to hold for 600 milliseconds and I was only getting about half of that. Electrolytic capacitors are not noted for being precision devices; the .35 actual that I got from the cap marked at .68 is within the typical tolerance for electrolytic caps, which are often specified as being -50%, +100%. And, electrolytics will lose value over time as the electrolyte evaporates; it isn't unusual to pull one out of a circuit after several years and find that it is only operating at 10% of its marked value.
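The arithmetic behind that is just the standard 555/7555 one-shot formula, t ≈ 1.1·R·C. The timing resistor value below is a hypothetical one chosen to give roughly 600 ms with a true 0.68 uF, not the actual value on my board:

```python
def monostable_period(r_ohms, c_farads):
    """Standard 555/7555 one-shot timing: t = 1.1 * R * C (seconds)."""
    return 1.1 * r_ohms * c_farads

R = 820e3  # hypothetical timing resistor chosen for ~600 ms with a true 0.68 uF

print(monostable_period(R, 0.68e-6))  # ~0.61 s with a capacitor at its marked value
print(monostable_period(R, 0.35e-6))  # ~0.32 s with the electrolytic that measured low
```

Same resistor, half the capacitance, half the timing period, which is exactly what I saw on the bench.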
Standard tantalums, on the other hand, are specified at plus or minus 10%, and you can get tighter tolerances at a somewhat higher price point. And solid tantalums will retain their operating value over a long period of time. (Wet-slug tantalums are still made for special applications, but don't waste your time with them.) The advantages for synthesizer circuits should be obvious: when the calibration of, say, a VCO or a VCF depends on a capacitor circuit, tantalums will make the circuit closer to the center of the calibration range at assembly, and will retain their value over time, reducing the need for recalibration. That's why I went with a tantalum when I built the board above; the .68 uF marked cap got me as close to the 600 millisecond time value as I could easily measure (50 ms or so) without trimming.
The one other thing about tantalums: there is no such thing as a non-polarized tantalum cap. So keeping in mind that you need to respect the polarity, a bit of care is required in design. Do that right, though, and you'll have a more stable and reliable circuit. After all, tantalums are considered reliable enough by the aerospace industry that they are heavily used in aviation and spacecraft. So don't be afraid of the tantalum.
This is where we start to get into the more complex of the basic electronics components. Transistors are really the basis of modern electronics. It was the transistor, and its subsequent micro-miniaturization into integrated circuits, that really kicked off the electronics revolution in the latter half of the 20th century.
There are a number of different basic types of transistors. In this installment I'm only going to cover the bipolar transistor, which is still the most commonly used type. You are also likely at some point to run across another type, called the field effect transistor (e.g., JFET, MOSFET). Those work on a different physical principle and they have different characteristics. I'll cover them in a future installment.
What Does a Transistor Do?
Basically, a transistor is a device that controls a flow of current. This is the first thing to understand: the bipolar transistor is not a voltage regulating device; it's a current regulating device. A transistor has three terminals: current flows from one terminal to the other, under the influence of the third terminal. A small current flowing into or out of the controlling terminal controls a much larger current flowing between the other two terminals. This is what allows the transistor to act as an amplifier: a small current can control a much larger one.
The three pins of a bipolar transistor are referred to as the emitter, the base, and the collector. The base is the control terminal; a small current at the base controls a larger current flowing between the emitter and the collector. Which direction these current flows go in depends on which of the two flavors of bipolar transistor you are dealing with; the two are known as PNP and NPN, based on their construction. If you remember the discussion from the diode installment of this series, a diode is a junction between two types of semiconductor, known as P-type and N-type. Same thing here: the PNP and NPN designations indicate what types of semiconductor material the transistors are made from, and by implication, which direction current can flow through them.
At this point we will introduce the symbology for transistors. The two types are denoted as:
The pin coming out to the left side of the symbol, perpendicular to the little bar, indicates the base terminal. The little arrow indicates the emitter, and of course the remaining pin is the collector. Note that in the symbol for a bipolar transistor, the collector and emitter are always portrayed as meeting the little bar at a 45-degree angle. If they aren't, then the symbol is for some type of FET transistor, not a bipolar.
As you may have figured, the direction of the arrow indicates whether the transistor is a PNP or NPN type. But what does the direction actually mean? This is where we introduce the first principle of the bipolar transistor: the junction between the emitter and the base acts as a diode. And, in accordance with the base pin being the middle pin in the transistor symbol, the middle letter in "PNP" and "NPN" indicates the type of the base; either of the other two letters can be regarded as representing the emitter. So, in a PNP transistor, the emitter is a P-type and the base is an N-type. If you remember from the diode discussion, this indicates that current can only flow from the emitter to the base, not the other way around. Accordingly, the arrow on the PNP symbol is pointing inward, to indicate that current goes in through the emitter. The opposite is true for an NPN transistor; since the base is P-type, current can only flow from the base to the emitter. So the emitter has to be a current exit, and so its arrow points out.
Do not, at this point, start confusing the current-in or current-out state of the emitter with where you actually obtain the amplified output signal! In both the PNP and NPN types, you can tap off of either the emitter or the collector to get the output, depending on the circuit design. We'll cover this further down. The arrow only indicates which direction current is flowing through the transistor.
So what about the base-collector junction? Does it also act as a diode? Well, no. Why not? To be honest, I don't know enough about the physics to give you a complete answer, but here's a simplified explanation: The base region of the transistor is very, very thin compared to the collector and emitter regions. So when an electron is attracted toward the base region by an opposite charge leaving the base, instead of taking a 90-degree turn and heading laterally towards the base pin when it reaches that region, it finds it easier to just pass on through to the other side. The collector region is "doped" differently so that it has much less of a tendency to pull charges away from the junction area when it is reverse-biased. So, although some electrons or "holes" do get gathered up by the base, most of them just go on through. That's what makes the transistor an amplifier. This is far from being a satisfactory explanation, but for a better answer, I'll have to refer you to someone who knows more about semiconductor physics than I do.
Transistor Circuit Basics
So, let's see what we've got so far. We have three layers of semiconductor material, with two junctions between layers of opposite types, one of which acts like a diode and one of which doesn't. For purpose of discussion, let's assume for a while that we are talking about an NPN transistor. Now let's apply some voltages to some of the terminals. We'll start by applying 12 volts to the collector, and grounding the base and the emitter.
What happens? Nothing. Why not? Remember, the P-N junction between the base and the emitter has all of the properties of a diode -- including diode drop. Because of the diode drop, and the fact that we're talking about silicon, no current can flow from the base to the emitter until the base is at least 0.6V higher than the emitter. So now let's apply, say, 3V from a battery to the base.
Now what happens? Probably, the transistor burns up! Why? Because we're not thinking in the transistor's terms. The transistor isn't a voltage controlling device; it's a current controlling device. When we connected the battery to the base, we gave it an almost infinite source of current. The input impedance of the base of a bipolar transistor is very low. So right away a large current went into the base; this in effect threw the transistor "wide open" and a whole bunch of current flowed from the collector to the emitter. Even running wide open, the transistor has a small amount of resistance, and like any resistive device, there is a limit to how much heat it can dissipate.
So, after clearing the smoke and installing another transistor, let's re-think this and try again. The voltage into the base, as long as it's at least 0.6V above the emitter, doesn't really matter. What matters is the current going into the base. So let's put a potentiometer in between the base battery and the transistor base terminal. We'll also put in an ammeter in between the pot and the transistor base, and we'll put another one on the transistor's collector. I've prepared three short videos to illustrate some of these principles; they use the circuit below:
We turn on the power and start turning the pot slowly clockwise, as in this first video:
What do we see now? At some pretty small value of current going into the base, we'll also see current starting to flow into the collector. As we increase the base current, the collector current will increase in proportion (up to a point, which I'll describe in a moment). The transistor is amplifying the current: as we put a current N into the base, we get a larger current X*N going into the collector. What is the value of X? That is, by definition, the transistor's current gain. When discussing bipolar transistors, the current gain is called the beta. (You may also see the terminology "hFE".) The beta is a property of the design and construction of the transistor, perturbed somewhat by manufacturing variation. For the typical commodity small-signal transistors that we commonly deal with, the beta will generally be in the 50-250 range depending on the specific type. Fancier transistors, particularly power types, have gains ranging from the hundreds up to about 1000. The transistor used in the demo video has a beta of 200. So, for instance, when we adjust our pot for 50 uA into the base, we see 10 mA going into the collector. What if we used a PNP transistor? Well, we'd have to swap the voltages at the collector and the emitter, and apply a negative voltage so as to draw current out of the base rather than putting current in. But the operating principles are exactly the same. Only the arithmetic signs change.
Question at this point: what is the current coming out of the emitter? Well, all current that goes into the transistor has to come out somewhere. We have current coming in at both the collector and the base, and the only place where current is coming out is at the emitter. So it stands to reason that the emitter current is the collector current plus the base current: 10 mA + 50 uA = 10.05 mA. And if we had a third (accurate) ammeter on the emitter, we would see that this is true. Much of the time, when you are figuring currents through a transistor, you can disregard the base current and figure that the collector current equals the emitter current, since the base current is always a small fraction (according to the transistor's beta). Next question: what is the voltage at the base? Answer: it depends on the voltage at the emitter. As long as the transistor is flowing current, and isn't saturated (more on that in a moment), it will always maintain the diode-drop relationship between the base and the emitter, so if we are interested in the base voltage, we can always look at the emitter voltage, and the base voltage will be about 0.6V more than that. But the transistor is not really sensitive to the voltage at the base -- it's the current at the base that matters, and you will often have to figure out the value of a resistor to put in series with the base in order to set that current.
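Putting those relationships into numbers, here's a small sketch using the beta-200 transistor from the video; nothing here is specific to any particular part:

```python
def transistor_currents(i_base, beta):
    """Collector and emitter currents for a transistor operating in its active region."""
    i_collector = beta * i_base
    i_emitter = i_collector + i_base   # everything that goes in must come out at the emitter
    return i_collector, i_emitter

ic, ie = transistor_currents(50e-6, 200)   # 50 uA into the base, beta of 200
print(ic)  # 0.01    -> 10 mA of collector current
print(ie)  # 0.01005 -> 10.05 mA of emitter current; close enough to Ic for most purposes
```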
Which brings up a point: you must always ensure that both the collector-emitter and the base-emitter currents are limited somehow. As we discussed earlier, the transistor will willingly pass enough current to blow up both itself and components connected to it. One way to limit the collector current, as shown by our example circuit, is to put the load (in this case, just a resistor) in series with the collector. However, you can also put the load in series with the emitter -- but the circuit will behave somewhat differently. Check this second video, in which we add resistance to the emitter load; as the resistance goes up, the base voltage goes up. This occurs because the pot plus the current-limiting resistance at the collector effectively puts the transistor in the middle of a voltage divider. As the voltage at the emitter goes up, the base voltage has to go up in order to maintain the diode-drop relationship.
Transistor Performance and Characteristics
Transistor performance is often shown in the form of a characteristic curve which plots collector current vs. collector-emitter voltage, for various values of base current. I don't find these to be terribly useful; the main thing they illustrate is that as long as the collector-emitter voltage is sufficiently large (typically 1-3V), the collector current depends only on the base current -- it doesn't depend on the collector-emitter voltage. But we knew that already. What we really want to see, for educational purposes, is a chart that details base current vs. emitter current. Here's a rough one that I assembled from taking measurements on the transistor I used in the video above. The green area indicates where the transistor is behaving linearly (constant beta), and the red areas indicate where it is non-linear. Bear in mind that the upper red area, in particular, depends considerably on the circuit that the transistor is installed in.
This illustrates a couple of things. The red "corner" at the bottom left is where the transistor is approaching cutoff; as in the case of the diode, this is where the base-emitter junction voltage is less than the diode drop and it is very non-linear. When the base voltage drops to or below the emitter voltage, the transistor will completely stop carrying current, except for a small leakage current which, for smaller transistors, is usually on the order of a few nanoamps. The other interesting red bit is the flattening out of the curve as the collector current approaches 20 mA. This occurs as the transistor is approaching saturation -- a condition where it cannot carry any more current because there are no more charge carriers available at the junctions. Why are there no more charge carriers available? Because, in the case of the video above, the circuit that the transistor is in can't source any more current. Note that this is not a limitation of the transistor itself; it's a limitation of the circuit that the transistor is in, and specifically the resistance that I put in series with the collector. The 500 ohms of resistance between the battery positive and the collector limits the maximum possible current to about 20 mA.
In between the cutoff and saturation regions is the active region, shown by the green portion of the graph. In the active region, the beta remains nearly constant, and so the transistor's response to base current is nicely linear, as shown in the center portion of the above graph. When you are using the transistor as an amplifier, this is what you want. On the other hand, the cutoff and saturation regions have their uses too.
Transistor Applications: Switching vs. Amplification
The two most common uses for a transistor are switching and amplification. Let's cover the switching case first since it is easier to understand. In a switching application, we want the transistor to, at any given time, be in one of two states. The "off" state obviously corresponds to the transistor's cutoff region: we kill the base current, and the collector current goes to near zero. The "on" state can be a little trickier. It's common to operate a switching transistor in saturation; in this state, it has a very low resistance (provided that the collector current isn't excessive). A common rule of thumb for switching circuit design is to figure out what base current is needed to achieve the necessary collector current, and then double it. However, in doing so, you have to make sure that the transistor's limits for base current and power dissipation aren't exceeded. Here's an example of a common use for a transistor circuit: the transistor is being used to drive a relay. (The relay could, for example, be switching a high-voltage circuit on or off. A typical example in the audio world would be that the relay might be controlling the high-voltage plate supply in a tube amplifier.) Assume the following:
* The relay coil has an impedance of 250 ohms
* It takes 20 mA of current through the coil to make the relay close
* The transistor has a beta of 100
* The supply voltage is 12V
When the switch is closed, the base resistor feeds current into the base of the transistor. This is where the fun starts: how do you analyze this to know how much base current you need? Start by doing an Ohm's Law calculation: in order to push 20 mA through the 250-ohm coil, you need to apply at least 5V to it. So we know it's doable, provided that our 12V power supply can source at least 20 mA of current without drooping. Now, in order to pass 20 mA of current through the transistor, how much base current do we need? Dividing that current by the beta gives us 0.2 mA, or 200 µA, of base current. If the switch is also connected to the 12V supply, then in order to put 0.2 mA into the base, we need a 60K ohm resistor in series with the base. (Note in this case that because of the base-emitter diode drop, the voltage at the base will need to be at least 5.6V if the relay coil is in series with the emitter. If we only had 5V being supplied to the switch, that won't work; the transistor will remain cut off. If this is the case, you have two options: (1) supply a higher voltage to the switch, or (2) put the relay coil in series with the collector and ground the emitter.)
Now, the thing about transistors is, the beta of specific units varies a lot from unit to unit across a given part number. So if our transistor is spec'ed for a nominal beta of 100, the beta of the particular unit we have might only be 70 or so. There are two approaches to this. One is that we can test the individual transistor that we put into the circuit and make sure its beta is at least 100. However, if we are manufacturing in quantity, it will take extra time to do the testing, and cost more because we'll have to discard some percentage of the transistors that we buy. The other approach is to provide for the minimum expected beta of a particular unit. In a switching application, this is pretty easy; if we reduce the base resistor to 30K, then the transistor beta can be as low as 50 and it will still work, but the base current is still low enough to not cause damage.
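Here's the same arithmetic as a sketch, with the double-the-base-current rule of thumb folded in. The component values are the ones from the example above; the function itself is just an illustration, assuming the relay coil is in the collector leg with the emitter grounded:

```python
def base_resistor(v_drive, coil_ohms, coil_current, v_supply, beta, margin=2.0):
    """Size the base resistor for a saturated-switch relay driver.

    Assumes the relay coil sits in the collector leg with the emitter grounded,
    so the base sits about one diode drop (0.6 V) above ground.
    """
    # Sanity check: can the supply even push the required current through the coil?
    assert coil_current * coil_ohms <= v_supply, "coil needs more voltage than the supply provides"
    i_base = margin * coil_current / beta          # rule of thumb: double the minimum base current
    return (v_drive - 0.6) / i_base

print(base_resistor(12.0, 250, 20e-3, 12.0, 100))  # ~28.5K -- a standard 27K or 30K will do
```

With the doubling margin built in, the answer lands right around the 30K figure discussed above, and the circuit will still switch even if the particular transistor's beta runs well below the nominal 100.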
Suppose, instead, that we want the transistor to amplify a varying signal -- say, we need up to 200 mA at the collector from a transistor with a beta of 100. Well, assuming that your transistor is operating in the active region, it's pretty easy: choose the resistor that results in 2 mA of base current, and you'll get 200 mA out. But how do you know what resistor value gets you 2 mA of base current? And how do you know for sure that you are going to be operating in the active region? After all, that depends on the collector-emitter current.
Historically, though, amplification has been the most common use for transistors. This is not quite so true in audio applications today; opamps have taken over most of the functions of amplification within audio devices because they are easier to design with. In audio-frequency applications, discrete transistors remain mostly in power amplifiers. Nonetheless, we'll look at two methods of using transistors as amplifiers.
The first type of circuit uses a single transistor. As you might guess, the first problem that one faces when using a transistor to amplify alternating signals is that the transistor can only flow current in one direction, which means that if you just feed an AC signal into the base, at best only half of the signal will get amplified -- the other half will be absent from the output, because the opposite-polarity voltage applied to the base drives the transistor into cutoff. The way to solve this problem is to bias the input signal -- that is, add an offset current to it so that the transistor stays within the active region throughout the input signal range. This drawing illustrates a common way of doing this:
This is an example of a single-transistor amplifier driving a loudspeaker. The resistors obviously constitute a voltage divider, but it can also be thought of as a current divider. It sets the "operating point" of the transistor: the amount of current that enters the base when the input signal is quiescent, which is proportional to the voltage drop in the middle of the divider. Assuming that the input signal will on average be symmetric, the operating point needs to be at the midpoint of the transistor's active region. The input also needs to be scaled, per the transistor's approximate beta, so that a maximum input signal will not drive the transistor into saturation, and a maximum negative signal won't drive it into cutoff. Since the output will have a DC offset, the usual practice is to couple it into the following circuit via a DC-blocking capacitor (not shown here).
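To make the bias idea a bit more concrete, here's a hedged sketch of the usual way of working out the operating point for this kind of divider-biased stage. The resistor values are invented for illustration, not the ones in the drawing, and the function assumes there is some resistance in the emitter leg:

```python
def bias_point(v_supply, r_top, r_bottom, r_emitter):
    """Quiescent operating point for a voltage-divider-biased single-transistor stage.

    All component values are illustrative, not taken from the drawing above.
    """
    v_base = v_supply * r_bottom / (r_top + r_bottom)  # divider sets the quiescent base voltage
    v_emitter = v_base - 0.6                           # one diode drop below the base
    i_quiescent = v_emitter / r_emitter                # standing emitter (~collector) current
    return v_base, i_quiescent

print(bias_point(12.0, 47e3, 10e3, 1e3))  # ~2.1 V at the base, ~1.5 mA of standing current
```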
This single-transistor circuit is pretty common in circuits which handle small currents, but for large-current circuits, there are better methods. The two-transistor "totem pole" is a commonly used circuit; it is nearly universal in audio power amplifiers. In this configuration, a PNP and an NPN transistor are used together, with either their collectors or their emitters tied together, and the output taken from that node point between the two. One transistor amplifies the positive half of the signal; the other transistor amplifies the negative half. In Class A operation, the transistors have their operating points set so that neither of them ever goes into cutoff. A given level of input signal causes one transistor to conduct more current while the other one conducts less; the difference is what appears as the output signal. When the input signal is quiescent, an equal current flows through both transistors. Well designed Class A amps are noted for their low-distortion operation, but they are very inefficient at low signal levels because a lot of "wasted" current is passing through the two transistors.
In Class B operation, the transistors have their operating points set so that they both enter the cutoff region precisely at the point where the input signal is zero. Note that some bias is still required in order to overcome the base-emitter diode drop; otherwise, there would be a "hole" in the response at very low input levels. It's harder to design a Class B amplifier for low distortion due to the fact that the transistors are approaching the non-linear cutoff region, but a Class B amp is far more efficient than a Class A -- at a zero input level, the Class B amp draws little current. There exist designs that are a hybrid of these two concepts, known as "Class AB"; they set the operating points so that one transistor cuts off when the other is well into its active region. These are a compromise between distortion and efficiency.
Emitter vs. Collector Output
There are two basic ways of obtaining the output signal from the transistor, which we've already touched on, but they need to be described more explicitly. One is the common emitter configuration; in this configuration the emitter is connected to ground or to a power supply, and the output signal is obtained at the collector. If the output signal is something that requires a lot of current and has some resistance, such as a relay coil, the easiest way to do this is to put the load in series with the collector.
However, this configuration can also be used to derive a voltage signal from the input, provided that the load doesn't draw much current. The way to do this is to put a resistor in series with the collector (which you often will do anyway, to limit the maximum current) and then tap a point between the resistor and the collector. This effectively makes the transistor behave as a variable resistor in a voltage divider, and the output comes from the middle of the divider, as such:
Note a couple of things about this. The first is that the signal output is inverted from the input; the output voltage will be at its maximum when the base current is zero, and vice versa. The second thing to note is that the swing of the output voltage is determined by the maximum collector-emitter voltage, and so it can be of higher voltage than the input to the base. So in this mode, the transistor is capable of voltage gain in addition to current gain. This is illustrated by the video below:
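Apart from the video, a quick way to see both the inversion and the voltage gain is to sweep the base current in that divider picture. The supply and resistor values here are purely illustrative:

```python
def collector_voltage(v_supply, r_collector, i_base, beta):
    """Output voltage of a common-emitter stage with a resistive collector load."""
    i_collector = beta * i_base
    v_out = v_supply - i_collector * r_collector
    return max(v_out, 0.2)  # the output can't drop below saturation (~0.2 V)

for ib_ua in (0, 20, 40, 60, 80):
    print(ib_ua, round(collector_voltage(12.0, 1000, ib_ua * 1e-6, 100), 2))
# 0 uA -> 12.0 V, 80 uA -> 4.0 V: more base current, lower output -- the stage inverts,
# and microamps of base drive swing the output by several volts.
```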
The other configuration is usually called the emitter follower. In this configuration, the signal is taken from the emitter:
You can, if you are careful, design an emitter follower circuit with no current-limiting resistors. This configuration provides the maximum possible power within the limits of the transistor, which is why it is frequently used in power amplifiers.
Darlington Pair
You can use a pair (or more) of transistors to obtain higher gain, by driving the base of a second transistor from the emitter of the first one, like so:
This is known as a Darlington pair. Where the name comes from, I don't know; I've often wondered if it has anything to do with the notorious speedway in South Carolina. The total beta for the Darlington pair is B1 x B2, where B1 and B2 are the betas of the individual transistors. As you can see, you can obtain quite high gain levels this way. Darlington pairs can be purchased as pre-made parts, or you can of course make your own from individual transistors.
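The gain multiplication is easy to see numerically; the betas below are just typical small-signal values, not figures from any specific data sheet:

```python
def darlington_beta(beta1, beta2):
    """Effective current gain of a Darlington pair (ignoring the small additive terms)."""
    return beta1 * beta2

print(darlington_beta(200, 200))  # 40000 -- a few microamps of base current controls whole amps
```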
Other Useful Transistor Circuits
You may recall that back in the diodes chapter, we presented a simple regulated power supply circuit controlled by a zener diode. The problem with that circuit is that the zener itself has to dissipate all of the power not drawn by the load at any moment. Here's an improved version:
In this circuit, the zener only has to dissipate the very small base current. The regulated voltage will always be 0.6V less than the zener's voltage rating, thanks to the B-E diode drop. The transistor has to pass all of the power supply current, but you can get power transistors that can handle substantial current.
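In numbers, for a hypothetical zener:

```python
def regulated_output(v_zener):
    """Output of the transistor-buffered zener regulator: one B-E diode drop below the zener."""
    return v_zener - 0.6

print(regulated_output(5.6))   # 5.0 V out
print(regulated_output(15.6))  # 15.0 V out -- pick the zener about 0.6 V above what you want
```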
The next circuit is called a current mirror, and it has a number of uses:
The way it works is that the resistor on the left allows a specific amount of collector current to flow through the transistor on the left, and its connection to both of the bases will cause the same amount of current to flow through the transistor on the right. This means that a fixed amount of current will flow through the load (within the limits of the power supply). The two transistors must be of the same part number and be matched -- that is, tested to make sure they have the same beta value. The circuit is useful for any situation where a constant-current supply is needed. Replacing the resistor on the left with a varying load will cause the two loads to always see the same amount of current.
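The programmed current is set by the resistor on the reference side; here's a rough sketch with illustrative values, assuming the usual arrangement where the reference resistor runs from the supply to the diode-connected transistor:

```python
def mirror_current(v_supply, r_program):
    """Current copied to the load side of a simple two-transistor current mirror.

    The reference side carries (V_supply - Vbe) / R, and the matched pair copies it.
    """
    return (v_supply - 0.6) / r_program

print(mirror_current(12.0, 5600))  # ~2 mA through the load, regardless of the load's resistance
```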
A similar-looking circuit is the differential amplifier:
This circuit amplifies the difference between the inverting and the non-inverting input. If both inputs change by the same amount, the output does not change. This is frequently used in pro audio applications where "balanced" lines are used, e.g., to send microphone signals long distances to a mixing console. Noise that tries to enter the line will enter both inputs in equal amounts (called "common mode" noise) and the amp will reject it. It is also frequently used in sophisticated amplifier circuits where negative feedback is used to stabilize the circuit.
Transistor Numbering, and Packaging
Three main packaging styles are used for transistors these days. "Small signal" transistors are generally found in what is know as a "TO-92" package:
It's a small cylindrical plastic package, nearly always black. One side is flat for orientation, and circuit board designers often use a flat-sided circle on the board silkscreen to indicate which way the transistor is to be inserted. Generally, as you look at the flat side, the order of the pins is emitter/base/collector, but not always. So be sure to check the data sheet for the specific type that you have. Usually, the part number will be printed on the flat side. Not pictured is the TO-18 metal can, which is the same size as the TO-92 but does not have the flat side; instead, a small piece of metal sticks out from the edge of the can to identify the emitter pin. The TO-18 version is not much used anymore, but it was very common in electronic devices made prior to 1985 or so.
Somewhat higher-power transistors will usually be found in a "TO-220" package:
As in the case of the TO-92 package, the order of the pins may vary. The metal tab is intended to be mechanically fastened to a heat sink, to increase the current carrying capacity of the transistor. The tab is often electrically connected to one of the pins, usually the collector, so electrical insulation may be required between the tab and whatever it is fastened to. As usual, check the data sheet.
Serious power transistors come in a "TO-3" case:
Photo from Wikipedia Commons
These are most commonly found as output transistors in audio and RF power amplifiers, and switching transistors in solid-state power controls. Not visible in the photo are two pins that stick out from the bottom side, which are the base and emitter pins; the case itself is the collector. These are made to be inserted into a special socket, and they are usually in contact with a live heat sink (insulated from ground) which is also part of the collector circuit. Transistors this large have a number of non-ideal properties, such as high leakage current and parasitic capacitance.
There are three main systems for part numbering of transistors. As in the case of diodes, there is a JEDEC system for transistor numbering. While diode JEDEC numbers all start with "1N", transistors start with "2N". This will be followed by a three- or four-digit number, and possibly additional letters to identify variants. In general, higher numbers are more recent designs.
Many transistors made in the Far East use the Japanese Industrial Standard (JIS) system. In this system, numbers for PNP types start with "2SA" or "2SB", and NPN types start with "2SC" or "2SD". A European system called Pro-Electron also exists. In this system, the first character is "B" to indicate a silicon-based device. A second letter of "C" or "D" indicates an audio-frequency transistor; "F" or "L" indicates a radio-frequency transistor, and "U" indicates a power switching transistor. Here is a good Web page that summarizes these systems.
Transistor Conventions and Standard Parts
You may have noticed that all of the circuits I've presented in this post have used NPN type transistors. There is some history behind that: in the 1960s when transistors first came into use, for reasons not clear to me, NPN types were a lot easier to make -- and therefore a lot cheaper -- than PNP types with equivalent characteristics. For that reason, designers figured out how to build many common circuits using only NPN types. This persists somewhat in electrical engineering; when the choice between NPN and PNP transistors is arbitrary, designers will usually go the NPN route. A side effect of this is that when engineers discuss the characteristics of various transistors, they will generally talk about the NPN type as being the standard, and then just note whether or not an equivalent PNP type exists. PNP types aren't often discussed for their own sake.
In synthesis circuits, we don't typically deal with large currents, so we usually stick to small-signal transistors. (This will change if you get into building amplifiers.) Doing a quick survey of small-signal transistors used in the most common published synth-DIY circuits, I note these three commonly used part numbers:
* BC547. A very useful and inexpensive NPN small-signal transistor, with a maximum collector-emitter voltage of 45V and maximum collector current of 100 mA. A Fairchild data sheet specifies beta as being in the range 110-220 for the "A" version; there are "B" and "C" versions with higher beta ranges at somewhat higher prices. As of this writing, Mouser has the "A" version listed for a whopping five cents USD apiece, quantity 10. They come in the TO-92 plastic package. The BC557 is a complementary PNP type.
* 2N3904. A similar part to the BC547, with max collector-emitter voltage of 40V and a maximum collector current of 100-200 mA depending on which version you get. Beta is in the 100-300 range; here's a Fairchild data sheet. It comes in a TO-92 package and a variety of surface-mount packages. It's a little faster than the BC547, which generally won't matter in synth applications. The main thing you have to watch is the max emitter-base reverse breakdown voltage of 6V. Mouser is quoting them at five cents USD apiece, quantity 25. The 2N3906 is a complementary PNP type.
* 2N2222. The easy-to-remember "all twos" NPN transistor has some virtues over the above, mainly that it can handle more collector current: 500-600 mA for most versions. It is still available in the TO-18 metal can as well as the TO-92 package; the former is good for applications where you need to thermally couple it to something (e.g., the expo converter circuit in a VCO/VCF), and some people claim that the extra parasitic capacitance of the TO-18 can makes it sound a bit better in audio applications. Here's an ST Microelectronics data sheet; collector-emitter voltage is 40V and beta is in the 100-300 range. As in the case of the 2N3904, you have to watch the 6V base-emitter reverse breakdown voltage. Main disadvantages are the somewhat higher cost (cheapest one I see at Mouser is $0.54 USD, quantity 25), and the fact that there is no exactly complementary PNP type.
On To the Next
As you can see from the date of my last post, it's taken me a long time to get this one together, for various reasons. This is the end of the series on discrete parts; next, we'll swing into integrated circuits with a discussion of operational amplifiers -- op amps.
One of my recent acquisitions for the Discombobulator, as documented in the previous post, is an Encore MFS01 Frequency Shifter, in MOTM format. Encore also offers this in Frac and Euro formats; the MOTM-format unit had been out of stock for some time, but this year Encore has been doing new runs of its MOTM-format modules. I received mine a couple of months ago, but I didn't have a place to install it until I built the new block that I documented in my previous post. So I'm just now getting to play with it.
A Frequency Shifter is Not the Same as a Pitch Shifter
So what's a frequency shifter? Well, I had a bunch of material that I wrote to address all that, but I've decided to save it for a follow-up post. To keep it short, a frequency shifter is sort of like a pitch shifter, but it does not maintain the harmonic or musical relationships between the various tones and sounds that make up the input. What's that good for? Well, for one thing, it's great for bell and chime sounds, and it behaves a lot more predictably than a ring modulator in doing that job. It can do flanging-like effects that range from subtle to startling. It can create sounds that play in unusual intervals and scales. But if you want to completely brutalize a sound, rip it apart and then glue the pieces back together like a ransom note, a frequency shifter is what you want.
Why Frequency Shifters are Usually Expensive
The classic frequency shifter is the one developed by Harald Bode in the early '70s and licensed to Moog. They were sold both as modules for Moog modulars, and as stand-alone devices. They were as renowned for their sound as they were notorious for their price tag -- they sold for about $1000 in 1975 dollars. Obviously that put them out of reach of most musicians, and so not that many were made. They are quite rare and valuable now.
One of the reasons the Bode frequency shifter was so expensive was the sheer amount of circuitry they contained. There are several hard problems that a frequency shifter design has to address. One of these is the problem of generating two sine-wave carrier signals "in quadrature", that is, identical in frequency but separated in phase by 90 degrees. The Bode design was noted for its ingenious approach to this, which produced a very clean pair of carrier signals, but it was costly to implement.
With the benefit of three decades of subsequent technology development, Encore was able to take a different approach to this problem, using an option that wasn't available to Bode: go digital for the carrier generation portion. Encore incorporated a microprocessor that generates the quadrature sine waves from (I presume) an internal look-up table. This makes it easy to maintain the phase relationship while responding to panel controls and control voltage, and it takes a lot less circuitry than the Bode design. Thanks to this, Encore is able to offer their frequency shifter at a lower cost ($399 USD) than competing models from Modcan (the Modcan 39B sells for about $1050 as I write this) and Club of the Knobs (the COTK 1630 lists for €950, about $1250 at the moment). Note that the audio signal path is still all analog -- only the generation of the carriers is digital.
(In fairness, it should be noted that Modcan has two models, the all-analog 39A/B, and the all-digital 65B, which is a dual unit. The 65B currently lists for $770.)
The Panel and Controls
With that, let's take a detailed look at the panel. Here's the top portion:
The large Initial Shift and the smaller Fine Shift knobs together set the amount of frequency shift. The Fine Shift knob has a range of about 150 Hz in either direction. The Initial Shift control has a range of approximately 3500 Hz; on the unit I have, all of the action takes place between, roughly, the -4 and +4 positions of the knob -- the 4-to-5 areas have no effect. The Initial Shift knob has a deadband at the zero mark that is useful when using the Fine Shift knob to achieve small frequency shift settings. However, the deadband is also rather disconcerting because there is a jump of about 100 Hz when the knob is moved off of the deadband. To do shifts of less than 100 Hz, you must center the Initial Shift knob on the deadband and then use the Fine Shift.
The Input Gain control attenuates the input signal to the frequency shifting circuits. Note that said signal consists of a mix of three things: the signal from the input jack, plus the signals being fed back to the shifting circuitry by the Up Feedback and Down Feedback controls. When using the feedback, you'll find that you have to turn this down some to avoid overloading the input. The red LED next to the knob indicates clipping. The Frequency CV control attenuates the signal being fed into the frequency control voltage jack.
Below is the lower half of the panel:
We'll start with the jacks. The Input jack is obviously the input for the audio signal to be processed. The CV jack accepts a control voltage (range +/- 5V) which controls the amount of frequency shift, along with the Initial Shift and Fine Shift knobs. Note that response to control voltage is linear, at a rather measly 100 Hz/volt with the Frequency CV control on 10, so you can't do huge sweeps with the CV. The Up Out and Down (DN) Out jacks are the two outputs from the shifter. The Down Out output responds to the reverse of the shift controls and the CV; in other words, when the Initial Shift knob is turned to the right, the output at the Up Out jack increases in frequency, but the output at the Down Out jack decreases in frequency. Note that the signal present at both of these jacks is 100% "wet"; there is no provision for mixing in any of the unprocessed dry signal. If you want that, you have to use an external mixer.
The Up Feedback and Down Feedback knobs feed some of the output of the Up Out or Down Out jacks, respectively, back to the input. Be careful with the Down Feedback since it can create a "ping-pong" resonance in the circuit which can result in runaway feedback if the input signal hits a resonant frequency. If this occurs, you have to turn the input trim control down to 0 to clear it, which could be embarrassing in live performance.
The device makes available the two sine-wave carrier signals at the Sine Out and Cos Out jacks. The two knobs control the signal level present at these jacks. These two signals will always be at the same frequency, which is determined by the amount of frequency shift (which means their frequency is also affected by the frequency shift CV). The cosine signal leads the sine signal by 90 degrees when the frequency shift is positive; the opposite is true when the frequency shift is negative. The two lights next to the knobs light when each signal is near its positive peak; at low frequencies, they provide visual indication of both the frequency and the direction of the shift.
Testing and Demonstration
I started off doing some tests with some simple waveforms; here is a clip, which I'll refer to as we go through the description. (The clip is an uncompressed WAV file, to avoid any MP3 artifacts that might be triggered by the unusual timbres.) First, I fed a sine wave into the input and tried various frequency shift settings, starting with the fine shift. The pitch of the sine wave goes up like you would expect it to, within the range of the fine shift control; this is at 0:07-0:25 on the clip. Then, I advance the Initial Shift as far up as it will go. This is at 0:33-0:56.
Next, I move the Initial Shift into the downward range. Going the other way, something interesting happens which is characteristic of frequency shifters. As you turn the knob left of the zero mark, the pitch gets lower and lower, goes through bass and subsonic -- and then starts going up again. What's happening is that the shifter has actually taken the input through zero Hz, and it is now outputting a "negative" frequency. As it happens, a negative frequency sounds the same as the corresponding positive frequency; the output signal is inverted, but you can't hear that in isolation. But you notice the difference as the source goes up and down: when the frequency of the source decreases, the frequency of the output increases, and vice versa! It's called frequency reversal, and it makes sense when you look at the math: if the source is at 400 Hz, and the frequency shifter takes it 1000 Hz in the negative direction, the resulting frequency is -600 Hz (which sounds the same as positive 600 Hz). If you lower the frequency of the input, the result of the subtraction moves further below zero, so the absolute value of the output goes higher. If you lower the input to 100 Hz, now the output is at -900 Hz. This is the much-talked-about through-zero operation, and it's something that conventional pitch shifters can't do. In the clip, it starts at 0:56; it goes to subsonic and starts into the negative frequency range at 1:00. From 1:10 to 1:18, I twiddle the frequency knob on the source VCO, and the frequency coming out of the shifter responds opposite to it; this is frequency reversal. If you do this with a signal containing a mix of tones, the higher tones will be lower than the lower tones in the frequency-reversal region, which can do some truly bizarre things to natural sounds like voices and animal noises.
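The frequency-reversal arithmetic from above, as a two-line sketch of an idealized shifter (real units, of course, have carrier leakage and other artifacts):

```python
def shifted(f_input, shift_hz):
    """Output frequency of an ideal shifter; negative results fold back as audible positives."""
    return abs(f_input + shift_hz)

print(shifted(400, -1000))  # 600 Hz
print(shifted(100, -1000))  # 900 Hz -- lowering the input raises the output: frequency reversal
```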
In the clip, you can hear a subtle change in the timbre of the sine wave as I manipulate the frequency shift. Now, a sine wave should not have its harmonic content affected by the frequency shifter since it theoretically doesn't have any harmonic content; it should just go up and down in pitch, with no noticeable change in timbre. That isn't quite what happens: as I went up in frequency, some odd sub-harmonic tones started to appear. They weren't very loud, but they were audible. I'm not sure where they are coming from. These also appeared as overtones when I went into the negative frequency range. Perhaps the module is picking up electrical noise from something else in the room. Another possibility is that the module's carrier suppression is not quite perfect, and the carrier or some overtone of it is leaking into the output. It could also be that the sine-wave output on the VCO that I used as the source (a Dotcom Q106) is not absolutely pure; that would not be unexpected, since producing pure sines is a very difficult thing for a VCO to do. (I did also try the fixed 440 Hz generator from a Q125 standards module. Although the sine wave from that is noticeably more pure, the artifacts were much the same. So I'm thinking that some of the artifacts are carrier leakage.)
From 1:24 to 1:45, I turn up the Initial Shift again, and then bring in some Up Feedback, and then from there until 2:09 is the Down Feedback. You can hear the effects of these. Be careful when you listen to the latter part; it hits a couple of the aforementioned ping-pong resonances that cause level spikes.
Next, I ran a sawtooth wave through the frequency shifter, with far more dramatic and impressive results. Shifting the sawtooth wave by small amounts in either direction results in beating and interference patterns as the harmonics are moved off of the integer relationships; the waveform starts to intermodulate with itself. You can hear this in the Fine Shift demonstration from 2:29-2:48. A large shift upwards transforms the sawtooth wave into something distorted-sounding, eventually ending up sounding a bit like it's overdriving a filter with the resonance turned way up. This is in the clip at 2:50-3:10.
Negative frequency shift is even weirder; as the frequency goes through zero, first the fundamental and then the harmonics all go through zero and get inverted, and you can hear them doing it one at a time as the Initial Shift knob is gradually turned more to the left. And the timbre is just strange. This is in the clip at 3:11-3:35. The really interesting bit here is when the source frequency is varied while in the negative-frequency range. The effect is hard to describe; some harmonics go up in frequency, some go down, and the only words I can come up with is that it turns the sound "inside out". That makes no sense, so you just have to hear it, at 3:36-3:46. The up and down feedback controls were very successful at destroying the waveform, creating a bunch of noises that sounded like various forms of exotic radio broadcasts, or perhaps modem noises. Up feedback is demonstrated at 3:54-4:12, and down feedback is at 4:18-4:35.
Interestingly, through most of the things that I did with the sawtooth, there seemed to be a bit of the unaltered input waveform leaking through. I wondered about that, since I didn't hear it with the sine wave. I think it might be this: a sawtooth wave has a portion of the waveform that is basically an impulse -- on the scope it's nearly vertical. Fourier analysis tells us that an impulse or spike is a waveform of indefinite frequency; in theory, a perfectly vertical spike (impossible to achieve with finite bandwidth) contains every possible frequency. For this reason, I think the frequency shifter simply couldn't do anything with that part of the waveform; the math breaks down. The result is that it sounds like an extremely narrow pulse wave, at the pitch of the unaltered input, is riding through the circuit. That would explain why I did not hear it with the sine wave; a sine wave does not have a steep slope anywhere in its waveform.
Some Usage Ideas
So what can you do with the Encore frequency shifter? Here are a few examples. One use is to create phasing/flanging/stereo simulation effects. Connect a mono signal to the input. Center the Initial Shift knob in the deadband and set both of the feedback knobs on zero. Take a stereo out using the up out as one channel and the down out as the other. Or, for an enhanced effect, run the up out and down out to an external mixer. Mix the up out with some of the dry signal and pan that hard left; then mix the down out with some of the dry signal and pan that hard right.
A frequency shifter is of course the bees' knees at creating bell, chime, and other clangorous timbres. You'll have the best luck with sounds that don't contain a lot of closely spaced harmonics. I actually had good results feeding FM noises into it. Treat with the appropriate envelopes, and you've got the bells of doom. The nice thing about it is that the apparent pitch of the resulting sound is pretty predictable, much more so than if you use a ring modulator to create bell sounds. So you can make it sorta kinda play in scale. I managed to do some things that played more or less in scale over an octave. Precision tuning it won't do, but that isn't what you get a device like this for. If you want the strange sound but you also want it to play in tune, sample it; the circuits are very stable and will hold a specific timbre while you set it up for sampling.
However, I found that what this unit does best is brutalize sounds. There are a million ways to make it distort a sound -- but in a completely different way from what a clipping or fuzz circuit does. It excels at creating sounds that have a lot of closely spaced overtones with weird quasi-random beating and pulsing going on. And when you get into the frequency inversion regime, the results are indescribably weird.
The sine and cosine carrier outputs are useful as control voltages for various purposes. For example, you can do a simple rotary-speaker effect by feeding each one to the CV input of a VCA. Feed the same audio into both VCAs, and then take their outputs to a stereo mixer panned left and right. The shift controls will control the speed of the apparent rotation, in either direction.
In short, this isn't a pitch shifter. You won't use it to correct clam notes in a track. You will use it to make bizarre and other-worldly sounds. Getting it to behave predictably may be a bit of a struggle. But if all of life was predictable, what fun would that be? The opportunities for serendipitous discovery are huge here. Run stuff through it, turn the knobs, and see what happens. Then, build a patch that uses that. You may surprise yourself.
Here's the usual sequence of build photos. I finally used up all the scrap 3/8" plywood (left over from a years-ago repair job at our previous house) that I've been using to build the bases of these things. I decided I wanted something a bit sturdier anyway, so I got some 5/8" plywood. Here, the rail pieces have been cut and are lying on the base, and the front rail has been glued in position and is clamped while the glue sets.
All of the rail pieces are in place except for the top rail. The two rear posts (to hold up the top) were cut from a piece of dowel that I had lying around from some long-ago job.
After building the last block (Iapetus), I swore that I'd use its power supply configuration from now on. So of course I changed it again. The reason is that I acquired a used Power One triple-voltage supply from a regular poster at Muff's for a very reasonable price. The Power One supplies are functionally and physically interchangeable with the Condor supplies that I usually use. Both are considered good brands; I usually use Condor because that's the brand that Mouser carries.
Here's the supply and the MOTM-990 power distribution board, in place but not yet mounted on the base. It was a little tight, and I wanted to get all the wiring connected before I screwed them down. The supply came with leads already soldered on the +/-15V side, and a power cord installed on the transformer. Unfortunately, someone had used a shielded cable for the power cord. Never ever use shielded cable for power! I had to remove that and make a new one out of zip cord.
Here is the completed power supply installation. The +/-15V comes from the part of the supply to the right of the transformer; the last owner had configured it for 12V, and I had to track down and remove two soldered jumpers to convert it to 15V. Fortunately, the instructions for how to do so are printed on the back of the chassis. The +5V comes from the smaller board to the left of the transformer. Everything is wired to the MOTM-995 distribution board, which has connectors for both MOTM 4-pin and 6-pin standards. The 6-pin connectors have the +5V and can be used to power a Dotcom or similar module, by constructing an adaptor cable. The line cord comes in at the bottom left of the picture and has an in-line AGC standard fuse holder on the hot side. I fuse these at 1A.
Some close-ups of the installed modules. First, the bank of MOTM-310 VCOs. I bought these as a package; I intend to use them for FM experimenting.
The Mankato Filter. This is an unusual filter, designed originally by Thomas Henry, that produces different response characteristics depending on which of the output jacks in the big circle you plug into. When the resonance is turned up to self-oscillation, it becomes an 8-phase sine wave VCO; each output jack is 45 degrees advanced from the previous one.
The Encore Frequency Shifter. Unlike a pitch shifter, a frequency shifter shifts each partial in the input signal by the same amount, in terms of Hz. This means that, unlike the pitch shifter, it does not maintain the harmonic relationships that are present in the input signal. Frequency shifters are complex circuits and are usually very expensive; Encore figured out how to build a less expensive one by using a microprocessor to generate internal control signals. It is capable of doing quite brutal things to a signal! The MOTM-formatted version was out of production for a long time, but when Encore announced they were doing another run of them early this year, I jumped on it.
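A toy example makes the distinction clear (the numbers are mine, not anything from Encore's documentation):

```python
# Shifting every partial by the same number of Hz destroys the harmonic ratios,
# while pitch shifting preserves them.
harmonics = [100, 200, 300, 400]       # the first four partials of a 100 Hz tone

freq_shifted = [f + 50 for f in harmonics]     # frequency shifter, +50 Hz
pitch_shifted = [f * 1.5 for f in harmonics]   # pitch shifter, up a fifth

print(freq_shifted)    # [150, 250, 350, 450] -> no longer integer multiples of the lowest
print(pitch_shifted)   # [150.0, 300.0, 450.0, 600.0] -> still harmonic
```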
I'll have reviews and sound samples of these coming up over the next two weeks.
Jeez, have I actually not posted anything since April? Anyway, there is a new Statescape, Mississippi, here. There's some info about it on the Web site. I'm building the long-delayed block 5 of the modular; I'll try to get a post up about it this weekend.
The idea for this post started from a thread on VSE, talking about the use of test equipment as electronic music instruments. There is, of course, a long history of such, actually pre-dating the invention of synthesizers per se. Significant parts of early electronic themes and soundtracks, such as the "Doctor Who" theme and the soundtrack to Forbidden Planet, were realized using laboratory electronic test equipment such as oscillators, RF modulators, and sweep generators.
The function generator I have is a Hewlett-Packard 3312A, one of the company's last analog models. (Note that the former Hewlett-Packard test instrumentation division is now owned by Agilent.) Its controls and capabilities are pretty typical for its era; I think the one I have was built around 1980. It is a solid-state, purely analog device. Here is a look at the front panel:
The front panel is divided into two sections: the main generator section, and a modulation generator section. The left two-thirds of the panel contains the controls for the main generator:
The big knob on the left is the frequency control; its dial runs from 0.1 through 1 up to 13. The big plastic dial is part of the knob and rotates with it. The frequency range is chosen by pressing one of the nine decade buttons across the top, which provide ranges (with the frequency knob set to 1) from 0.1 Hz to 1 MHz. The knob multiplies the decade setting, so, for example, if the knob is at the 4 position and the 1 KHz button is pressed, the output frequency is 4 KHz.
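Put another way, the output frequency is just the decade value times the dial setting; a trivial sketch with made-up example values (the helper name is mine):

```python
def output_frequency(decade_hz, dial):
    """decade_hz: the range button pressed (e.g. 1000 for the 1 KHz button);
    dial: the big frequency knob position (roughly 0.1 to 13)."""
    return decade_hz * dial

print(output_frequency(1000, 4))   # 1 KHz button, knob at 4 -> 4000 Hz
```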
The signal generator can produce one of three basic waveforms: sine, square, and triangle. The FUNCTION buttons at the top right select the waveform. There are several controls that can modify the waveform. The first and most basic is the amplitude control. This consists of two nested knobs; the inner knob is a rotary switch that selects a peak-to-peak voltage range of 10 mV, 100 mV, 1V, or 10V. The outer knob is a vernier control that varies from 0 to the selection of the inner knob.
The symmetry control, to the right of the amplitude control, varies the generator's time base so that other, derived waveforms can be produced. The knob is only effective when the blue "CAL" button in the middle of the knob is out; when it is pressed in, the waveform is symmetrical and the knob's position is ignored. It affects all three waveforms. When square wave is selected, the symmetry knob serves as a pulse width control. For the triangle wave, the knob at its extremes produces ramp and sawtooth waves. Here's an example of the symmetry being varied from triangle to ramp:
With sine selected, it produces various distortions of the sine wave; turning it all the way to either extreme produces a waveform that moves through half of a sine wave and then jumps back to the starting point. Example:
I noted the pitch variations caused by the use of the symmetry knob. As I understand the circuit (I still need to look at it some more), the generator core is a triangle-core VCO, with separate integrating capacitors for the positive-going and negative-going halves of the waveform. The symmetry control works by varying the charging current for the two capacitors; it increases one to speed up that half of the waveform, and decreases the other to compensate so that the overall wavelength, in theory, remains the same. Apparently that process isn't perfect, or perhaps my unit is just in need of calibration.
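Here's a minimal sketch of how I understand that core to behave; this is my reading of it, not HP's actual circuit, and the function name and rates are made up. The symmetry setting splits the period between the rising and falling halves while the total period nominally stays put:

```python
# Triangle core with a symmetry control: the period is divided between the
# rising and falling segments; symmetry = 0.5 gives a symmetrical triangle,
# values near 0 or 1 approach a ramp/sawtooth.
import numpy as np

def asymmetric_triangle(f0, symmetry, fs=48000, seconds=0.01):
    period = 1.0 / f0
    t_rise = period * symmetry          # time spent "charging up"
    t_fall = period * (1.0 - symmetry)  # time spent "charging down"
    t = np.arange(int(fs * seconds)) / fs
    phase = t % period
    return np.where(phase < t_rise,
                    -1 + 2 * phase / t_rise,            # rising segment
                    1 - 2 * (phase - t_rise) / t_fall)  # falling segment

tri = asymmetric_triangle(440.0, 0.5)    # symmetrical triangle
ramp = asymmetric_triangle(440.0, 0.95)  # nearly a ramp, same nominal period
```

In the real circuit the two charging currents apparently don't compensate each other exactly, which would account for the pitch shift I heard.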
The offset knob adds a negative or positive DC offset, when the blue CAL button is out. Offset can range up to +/- 10V, but the manual cautions that if it causes the waveform to exceed 10V in either direction, clipping and possible damage to the output circuit will result. Pressing the CAL button in cancels the offset. The phase knob is used to produce non-continuous waveform bursts. I'll get to that in a minute; for now, note that it has to be in the "free run" position in order for the generator to run.
The waveform emerges from the jack under the amplitude control. Note that all of the jacks on this unit are BNC (bayonet) jacks, which are common in test equipment but not generally used in synths or audio production, so you'll need an adaptor. Also note that the settings on the amplitude control are rated for connecting the output to a 50-ohm load. Most synths and audio equipment have a much higher input impedance than this, which means that the peak-to-peak output voltage will be higher than indicated by the knob position (see the quick calculation below). You'll have to tweak the amplitude control to get the output level that you want and avoid clipping. The jack to the left, marked "sync", outputs a square wave whose trailing edge is at the positive-going zero crossing of the main output waveform. This can be useful for syncing other oscillators or sequencers.
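Here's the back-of-the-envelope version of that, assuming the output behaves like a source with the usual 50-ohm output impedance (my assumption, and the reason the panel markings would be specified into a 50-ohm load):

```python
# Simple voltage-divider model of the generator output driving a load.
def load_voltage(open_circuit_v, source_ohms, load_ohms):
    return open_circuit_v * load_ohms / (source_ohms + load_ohms)

v_open = 2.0   # open-circuit voltage that reads as "1V" into a matched 50-ohm load
print(load_voltage(v_open, 50, 50))      # 1.0  -> matches the panel marking
print(load_voltage(v_open, 50, 100000))  # ~2.0 -> into a typical high-impedance synth input
```

In other words, expect roughly double the indicated level when feeding a synth or mixer input.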
The area occupying the right 1/3 of the panel is the modulation section:
The modulation section contains all of the circuitry needed to do amplitude modulation (AM) or frequency modulation (FM) on the main generator's waveform. It also contains its own low frequency oscillator to use as a modulation source. The three buttons at the upper left allow the selection of AM, FM, or sweep (labeled SWP) generation. These are not radio buttons; they are individual on-off switches, and any combination can be selected. To the right of those buttons, the next three allow selection of the LFO waveform: sine, square, or triangle. A concentric control underneath the waveform selection buttons controls the LFO frequency: the outer knob is a four-position rotary switch that can select a range of up to 1 Hz, 100 Hz, or 10 KHz. (The 0 position is for setting up the sweep function; it "freezes" the LFO at a zero crossing point.) The inner knob is a vernier that allows selection of the desired frequency within the selected range.
The knob directly underneath the modulation selection buttons (the one with the little arrow) controls the amount of modulation. The modulation generator has its own symmetry knob, whose CAL position is a click-stop at full counterclockwise rotation. The BNC jack in this section is both an input and an output; when the internal LFO is in use, its waveform is output from this jack. When external modulation is selected (by partially depressing and then releasing a waveform selection button, so that all three of the sine, square, and triangle buttons are out), it is input via this jack.
The AM is powerful; both the modulation generator and external modulation can achieve 100% modulation of the generator output (from 0V output to maximum output). AM is a very useful capability that, for some reason, is seldom found on commercial synths. Here's an example of a triangle wave being amplitude modulated with a square wave:
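The scope photo tells the story better, but for reference, here's roughly what 100% AM amounts to, written out as a quick sketch (the frequencies are just examples of mine):

```python
# 100% AM: a triangle carrier whose amplitude is driven between zero and full
# scale by a square-wave LFO.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
carrier = 2 * np.abs(2 * ((440 * t) % 1.0) - 1) - 1   # 440 Hz triangle wave
lfo = np.sign(np.sin(2 * np.pi * 4 * t))              # 4 Hz square-wave LFO

depth = 1.0                               # 100% modulation
am = carrier * (1 + depth * lfo) / 2      # amplitude swings between zero and full scale
```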
The FM mode, on the other hand, is limited to +/- 5% of the carrier frequency. Audio FM synthesis usually requires far more modulation than that, so this is not very useful from a musical perspective. (There is another way to do it, which is described further down.) Example of a sine wave, FM'ed with a square wave (the photo is a multiple-trace exposure):
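Written out the same way, the +/- 5% limit looks like this: even at full depth, a 440 Hz carrier only wanders between about 418 and 462 Hz (again, the specific frequencies are my own examples):

```python
# Narrow FM: the instantaneous frequency deviates by at most 5% of the carrier.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f_carrier = 440.0
depth = 0.05                                      # the +/- 5% limit
modulator = np.sign(np.sin(2 * np.pi * 2 * t))    # 2 Hz square-wave modulator

inst_freq = f_carrier * (1 + depth * modulator)   # stays between 418 and 462 Hz
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
fm = np.sin(phase)
```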
The "sweep" modulation mode is one of those things that is common and useful for the function generator's intended purpose, but rather peculiar when viewed from a musical standpoint. In this mode, basically the modulation generator creates a ramp wave which can FM the main generator from the minimum to the maximum frequency for the frequency range selected. The starting frequency, ending frequency, ramp rate, and time between sweeps are all controllable. There are several controls which have alternate purposes in the sweep mode (which unfortunately are not labeled on the panel). First, the modulation frequency control's range switch must be set to the 0 position. Once this is done, the modulation percentage knob sets the starting frequency of the sweep. The modulation symmetry control sets the rate of the sweep, and the main frequency control sets the ending frequency. The modulation frequency vernier sets the repetition rate. The controls interact to an extent, and I had to do a fair amount of experimenting to get something that would be illustrative when viewed on the scope. I finally settled on a high frequency triangle wave with a fairly slow repetition rate for the scope photo. The audio sample is a mix of different settings.
The one other interesting mode of operation is the "burst" mode. In this mode, the function generator can be made to produce either single-cycle waveforms, or short bursts of waveform separated by periods of 0V output. The burst mode is activated by turning the main generator's trigger phase knob (below the symmetry knob) away from the "free run" position.
Two switches and a jack on the rear panel come into play: the slide switch at the upper left of the rear panel selects the single-cycle or multi-cycle burst mode. The switch below it selects internal or external trigger. With single cycle and internal trigger selected, the main generator will output one complete cycle of whatever waveform is selected, repeating at a rate determined by the LFO in the modulation section. The trigger phase control determines the starting and ending phase of the cycle. With the single/multi slide switch set to multi and internal trigger selected, the square wave of the modulation LFO gates the main generator on and off. When the LFO square wave goes high, the main generator starts at the phase selected by the trigger phase knob and continues to output until the LFO square wave goes low. When it does so, the main generator completes the cycle it is on and then stops. In either single or multi mode, when the trigger switch is set to external, gating of the main generator is controlled by the signal input at the rear panel EXT jack. This signal needs to be at "TTL" levels; that is, 5V for the high state, and 0V for the low state. Here's an example of the burst function at work, with the triangle waveform, and with the phase control and burst interval being varied. The first two-thirds is multi-burst mode; the last portion is single cycle burst mode.
Same as above, but with a different starting phase setting:
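For reference, here's a rough sketch of the single-cycle burst behavior with internal trigger, as I understand it; all the rates, the waveform, and the names are purely illustrative:

```python
# Single-cycle burst: at each trigger, emit one complete cycle of the waveform
# starting from the selected phase, then sit at 0V until the next trigger.
import numpy as np

fs = 48000
f_wave = 440.0          # main generator frequency
f_trig = 10.0           # repetition rate set by the modulation LFO
start_phase = 0.25      # trigger phase knob, as a fraction of a cycle

t = np.arange(fs) / fs
trig_period = 1.0 / f_trig
time_since_trig = t % trig_period
cycle_len = 1.0 / f_wave

active = time_since_trig < cycle_len                 # one full cycle per trigger
phase = (start_phase + time_since_trig * f_wave) % 1.0
triangle = 4 * np.abs(phase - 0.5) - 1               # triangle cycle from the chosen phase
burst = np.where(active, triangle, 0.0)              # 0V between bursts
```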
The rear panel VCO jack provides for a control voltage to control the main generator frequency. Unfortunately, the signal format is not even slightly compatible with the 1V/octave standard used in most synths. Basically, with the frequency control set at its minimum position, the VCO input does what the frequency control does; that is, it varies the frequency from the minimum to the maximum for the frequency range selected. All of the ranges are 10:1 ratios between maximum and minimum frequency, so if you do the math, that's slightly over three octaves. However, the VCO input is linear; that is, it is a volts-per-hertz input. The scaling is about 0.2V per one-tenth of the frequency range (so, for example, if the 10 KHz range is selected, it's 0.2V per 1 KHz). It also uses negative voltages; the minimum frequency is at 0V and the maximum is at -2V.
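Here's a small sketch of that scaling as I read it, with the range minimum at 0V and the range maximum at -2V; the helper name is mine and the values are approximate:

```python
# Approximate CV needed for a target frequency on the rear-panel VCO input,
# assuming a linear, negative-going volts-per-hertz response across the 10:1 range.
def vco_cv_for_frequency(target_hz, range_max_hz):
    f_min = range_max_hz / 10.0           # bottom of the selected decade range
    return -2.0 * (target_hz - f_min) / (range_max_hz - f_min)

print(vco_cv_for_frequency(5500, 10000))    # -1.0 V, mid-range on the 10 KHz setting
print(vco_cv_for_frequency(10000, 10000))   # -2.0 V, top of the range
```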
To wrap this up: Just for fun, here's a mix of the six sample waveforms: Audio -- click here