Overview of Nonlinear Distortion Phenomena


Figure 1.8 (a) Power transfer and (b) gain characteristics of a typical RF quasilinear amplifier.

To get a first glance into the richness of nonlinearity, let us compare the responses of simple linear and nonlinear systems to typical inputs encountered in our wireless telecommunications environment example. Those stimulus inputs are usually sinusoids, amplitude and phase modulated by some baseband information signals, which take the form of

x(t) = A(t) cos [ωct + θ(t)]   (1.6)

For that, we will restrict the systems to be represented by a low-degree polynomial, yNL(t) = SNL[x(t)], such as

yNL(t) = a1 x(t − τ1) + a2 x(t − τ2)² + a3 x(t − τ3)³ + …   (1.7)

which we will assume is truncated to the third degree.

Although this polynomial of the delayed stimulus is only one example among all the nonlinear operators we could possibly imagine, modifying its coefficients and delays allows us to approximate many different continuous functions. Furthermore, if the input signal level is decreased enough, so that |x(t)| >> |x(t)²|, |x(t)³|, the polynomial smoothly tends to a linear system of yL(t) = SL[x(t)] = a1 x(t − τ1).
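As a concrete illustration, the following Python sketch (not part of the original text; all coefficient values, delays, and signal parameters are illustrative assumptions) evaluates the delayed third-degree polynomial of (1.7) on a sampled modulated carrier and checks that, for a sufficiently small drive, the output collapses onto the linear term a1 x(t − τ1).

```python
# Numerical sketch of y_NL(t) = a1*x(t-tau1) + a2*x(t-tau2)^2 + a3*x(t-tau3)^3.
# All numeric values below are illustrative assumptions, not values from the text.
import numpy as np

fs = 1e9                              # sample rate, Hz (assumed)
t = np.arange(0, 2e-6, 1/fs)          # 2 us of signal
fc = 100e6                            # carrier frequency (assumed)
A = 1 + 0.5*np.cos(2*np.pi*1e6*t)     # slowly varying amplitude A(t)
x = A*np.cos(2*np.pi*fc*t)            # x(t) = A(t) cos(wc*t), theta(t) = 0 here

a1, a2, a3 = 1.0, 0.2, -0.15          # illustrative polynomial coefficients
tau1, tau2, tau3 = 1e-9, 2e-9, 3e-9   # illustrative small delays

def delayed(sig, tau):
    """Delay a sampled signal by the nearest integer number of samples."""
    n = int(round(tau*fs))
    return np.concatenate((np.zeros(n), sig[:len(sig) - n]))

y_lin = a1*delayed(x, tau1)
y_nl = y_lin + a2*delayed(x, tau2)**2 + a3*delayed(x, tau3)**3

# For a small enough drive the quadratic and cubic terms become negligible,
# and the nonlinear output tends to the linear response.
x_small = 1e-3*x
y_small = a1*delayed(x_small, tau1) + a2*delayed(x_small, tau2)**2 \
          + a3*delayed(x_small, tau3)**3
print(np.max(np.abs(y_small - a1*delayed(x_small, tau1))))  # tiny vs. ~1e-3 linear term
```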

So, while the response of this linear system to (1.6) is

yL(t) = a1 A(t − τ1) cos [ωct + θ(t − τ1) − φ1]   (1.8)

the response of the nonlinear system would be

yNL(t) = a1 A(t − τ1) cos [ωct + θ(t − τ1) − φ1]
+ a2 A(t − τ2)² cos [ωct + θ(t − τ2) − φ2]²
+ a3 A(t − τ3)³ cos [ωct + θ(t − τ3) − φ3]³   (1.9)

which, using the following trigonometric relations,

cos (α) cos (β) = (1/2) cos (α − β) + (1/2) cos (α + β)   (1.10)

⇒ cos (α)² = 1/2 + (1/2) cos (2α),   cos (α)³ = (3/4) cos (α) + (1/4) cos (3α)

can be rewritten as

yNL(t) = a1 A(t − τ1) cos [ωct + θ(t − τ1) − φ1]
+ (1/2) a2 A(t − τ2)²
+ (1/2) a2 A(t − τ2)² cos [2ωct + 2θ(t − τ2) − 2φ2]
+ (3/4) a3 A(t − τ3)³ cos [ωct + θ(t − τ3) − φ3]
+ (1/4) a3 A(t − τ3)³ cos [3ωct + 3θ(t − τ3) − 3φ3]   (1.11)

where φ1 = ωcτ1, φ2 = ωcτ2, and φ3 = ωcτ3.

The case of most practical interest to microwave and wireless systems is the one in which the amplitude and phase modulating signals, A(t) and θ(t), are slowly varying as compared to the RF carrier cos (ωct). If the system's time delays are comparable to the carrier period (a simple case in which the system does not exhibit memory to the modulating signals), they are negligible on the time scale over which the envelope amplitude and phase evolve. Hence, (1.8) and (1.11) can be rewritten as

yL(t) = a1 A(t) cos [ωct + θ(t) − φ1]   (1.12)

and

yNL(t) = a1 A(t) cos [ωct + θ(t) − φ1]
+ (1/2) a2 A(t)²
+ (1/2) a2 A(t)² cos [2ωct + 2θ(t) − 2φ2]
+ (3/4) a3 A(t)³ cos [ωct + θ(t) − φ3]
+ (1/4) a3 A(t)³ cos [3ωct + 3θ(t) − 3φ3]   (1.13)

The first noticeable difference between the linear and the nonlinear responses is the number of terms present in (1.12) and (1.13). While the linear response to a modulated sinusoid is a similar modulated sinusoid, the nonlinear response includes many other terms, usually named spectral regrowth, beyond that linear component. Actually, this is a consequence of one of the most important properties distinguishing linear from nonlinear systems:
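To make the clusters in (1.13) visible, the following self-contained sketch (coefficients, frequencies, and signal parameters are illustrative assumptions, not values from the text) drives a memoryless third-degree polynomial with a modulated carrier and inspects the resulting spectrum near dc, ωc, 2ωc, and 3ωc.

```python
# Spectral-regrowth sketch: clusters at baseband, fundamental, 2nd and 3rd harmonics.
import numpy as np

fs, fc, fm = 1e9, 100e6, 1e6          # sample rate, carrier, modulation rate (assumed)
t = np.arange(0, 20e-6, 1/fs)
A = 1 + 0.5*np.cos(2*np.pi*fm*t)      # slowly varying envelope A(t)
x = A*np.cos(2*np.pi*fc*t)

a1, a2, a3 = 1.0, 0.2, -0.15          # illustrative coefficients, memoryless case
y = a1*x + a2*x**2 + a3*x**3

Y = np.abs(np.fft.rfft(y))/len(t)     # rough amplitude spectrum
f = np.fft.rfftfreq(len(t), 1/fs)

for f0 in (0.0, fc, 2*fc, 3*fc):      # inspect each expected cluster
    band = (f >= f0 - 5*fm) & (f <= f0 + 5*fm)
    print(f"cluster near {f0/1e6:6.1f} MHz, peak level {Y[band].max():.3e}")
```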

Contrary to a linear system, which can only impose quantitative changes on the signal spectra (i.e., modify the amplitude and phase of each spectral component present at the input), nonlinear systems can qualitatively modify spectra, as they can eliminate certain spectral components and generate new ones.

Two of the best examples for illustrating this rule are the rectifier (or ac/dc converter) response to a pure sinusoid, and the corresponding output of a linear filter. While the latter can, at most, modify the amplitude and phase of the input sinusoid (but can neither destroy it completely nor generate any other frequency component), the ac/dc converter eliminates the ac frequency component and transfers its energy to a new component at dc.

In our wireless nonlinear PA example, the nonlinear output components presented energy near dc, or 0ωc, at the second and third harmonics, 2ωc and 3ωc, and so on, but also over the linear response at ωc, as was shown in Figure 1.5.

The component at dc shares the same origin as the dc output in the mentioned rectifier. In practical systems, it manifests itself as a shift in bias from the quiescent point (defined as the bias point measured without any excitation) to the actual bias point measured when the system is driven at its rated input excitation power.

This bias point shifting effect has long been recognized in class B or C power amplifiers, which draw a significant amount of dc power when operated at full signal power, but remain shut down when the input is quiet.

Looking from the spectral generation viewpoint, that dc component comes from all possible mixing, beat, or nonlinear distortion products of the form cos (ωit) cos (ωjt), whose outputs are located at ωx = ωi − ωj, and where ωi = ωj.

The other components located around dc constitute a distorted version of the amplitude modulating information, A(t), as if the composite signal of (1.6) had suffered an amplitude demodulation process. They are, therefore, called the baseband components of the output. Their frequency lines are also generated from mixing products at ωx = ωi − ωj, but now where ωi ≠ ωj.

The components located around 2ωc and 3ωc are, for obvious reasons, known as the second- and third-order nonlinear harmonic distortion, or simply the harmonic distortion. Note that they are, again, high-frequency sinusoids amplitude modulated by distorted versions of A(t).

The cluster of spectral lines located around 2ωc is generated from all possible mixing products of the form cos (ωit) cos (ωjt), whose outputs are located at ωx = ωi + ωj, and where ωi = ωj (ωx = 2ωi = 2ωj) or ωi ≠ ωj. The third harmonic cluster has its roots in all possible mixing products of the form cos (ωit) cos (ωjt) cos (ωkt), whose outputs are located at ωx = ωi + ωj + ωk, and where ωi = ωj = ωk (ωx = 3ωi = 3ωj = 3ωk), ωi = ωj ≠ ωk (ωx = 2ωi + ωk = 2ωj + ωk), or even ωi ≠ ωj ≠ ωk.

Finally, the components located near ωc are distorted versions of the input. They include newly generated lines that fall around the original spectrum, but also lines that share exactly the same position as the linear response, and thus are indistinguishable from it. Contrary to the baseband or harmonic distortion, which are forms of out-of-band distortion, and thus could simply be discarded by bandpass filtering, some of these new inband distortion components are unaffected by any linear operator that, naturally, must preserve the fundamental components. Thus, they constitute the most important form of distortion in bandpass microwave and wireless subsystems.1 Actually, the impairment caused by nonlinear distortion in telecommunication systems is so severe, when compared to linear distortion, that it is common practice to reserve the name "distortion" for nonlinear distortion. Accordingly, in the remainder of this text, we will use the terms "nonlinear distortion" or simply "distortion" as synonyms, unless otherwise expressly stated.

1. Strictly speaking, the distinction between inband and out-of-band distortion components only makes sense when the excitation already has a distinct bandpass nature, as in the RF parts of microwave and wireless systems. In baseband subsystems, the various clusters of mixing products overlap, and they all perturb the expected linear output.

Referring again to the wireless system example of Figure 1.1, Figure 1.9 shows exactly that inband distortion effect by comparing the bandpass-filtered version of our PA nonlinear response to a scaled (or linearly processed) replica of its input.

Although the bandpass filter has recovered the sinusoidal shape of the carrier, a clear indication that the harmonics have effectively been filtered out [Figure 1.9(b)], the amplitude envelope is still noticeably distorted, a manifestation that the inband distortion was unaffected by the filtering.

Figure 1.9 The effect of bandpass filtering on the inband and out-of-band distortion. (a) Time-domain waveforms of the wireless system's PA input and filtered output signal amplitude envelopes. (b) Close view of the actual modulated signals showing the detailed RF waveforms.

For studying these inband distortion components, we have to first distinguish between the spectral lines that fall exactly over the original ones, and the lines that constitute distortion sidebands. In wireless systems, the former are known as cochannel distortion and the latter as adjacent-channel distortion, since they perturb the wanted and the adjacent channels, respectively.

In our third-degree polynomial system, all inband distortion products share the form of cos (ωit) cos (ωjt) cos (ωkt), whose outputs are located at ωx = ωi + ωj − ωk. And, while both cochannel and adjacent-channel distortion can be generated by mixing products obeying ωi = ωj ≠ ωk (ωx = 2ωi − ωk = 2ωj − ωk) or ωi ≠ ωj ≠ ωk, only cochannel distortion arises from products observing ωi = ωj = ωk (ωx = ωi) or ωi ≠ ωj = ωk (ωx = ωi).

To get a better insight into these inband distortion products, let us imagine we have a stimulus that is a combination of the modulated signal of (1.6) plus another unmodulated carrier, as was conceived in the system of Figure 1.1:

x(t) = A1(t) cos [ω1t + θ(t)] + A2 cos (ω2t)   (1.14)

Although this excitation can be viewed as our modulated signal plus an interfering carrier, it could also be understood as two of the spectral lines of (1.6), or even as the addition of two similar modulated signals, with the exception that now we are explicitly showing the amplitude and phase variation of one of the carriers and omitting that of the other.

Since the input is now composed of two different carriers, many more mixing products will be generated. Therefore, it is convenient to count all of them in a systematic manner. For that, we first replace the temporal input of (1.14) with a phasor representation using the Euler expression for the cosine:

x(t) = A1(t) cos [ω1t + θ(t)] + A2 cos (ω2t)   (1.15)
= A1(t) {e^{j[ω1t + θ(t)]} + e^{−j[ω1t + θ(t)]}}/2 + A2 {e^{jω2t} + e^{−jω2t}}/2

which leads us to the conclusion that the input can now be viewed as the sum of four terms, each one involving a different frequency. That is, we are assuming that each sinusoidal function involves a positive and a negative frequency component (the corresponding positive and negative sides of the Fourier spectrum), so that any combination of tones can be represented as

x(t) = Σ_{q=1}^{Q} Aq cos (ωqt) = (1/2) Σ_{q=−Q}^{Q} Aq e^{jωqt}   (1.16)

where q ≠ 0, and Aq = A*−q for real signals.
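As a quick numerical illustration of (1.16), the snippet below (a sketch; the tone frequencies and amplitudes are assumed, not taken from the text) builds a two-tone signal both as a sum of cosines and as the corresponding two-sided sum of complex exponentials, and verifies that the two representations coincide.

```python
# Two-sided phasor representation of a real multi-tone signal, as in (1.16).
import numpy as np

t = np.linspace(0, 1e-3, 1000, endpoint=False)
freqs = [1e3, 2.5e3]            # assumed tone frequencies (Q = 2)
amps = [1.0, 0.4]               # assumed real amplitudes, so A_{-q} = A_q

x_cos = sum(A*np.cos(2*np.pi*f*t) for A, f in zip(amps, freqs))

# q = -Q..Q, q != 0: each cosine splits into conjugate-symmetric exponentials.
x_exp = 0.5*sum(A*np.exp(2j*np.pi*s*f*t)
                for A, f in zip(amps, freqs) for s in (+1, -1))

print(np.max(np.abs(x_cos - x_exp.real)), np.max(np.abs(x_exp.imag)))  # both ~0
```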

Having x(t) in this form, the desired output is determined as the sum of various polynomial contributions of the form

yNLn(t) = (1/2^n) an [Σ_{q=−Q}^{Q} Aq e^{jωqt}]^n   (1.17)
= (1/2^n) an Σ_{q1=−Q}^{Q} … Σ_{qn=−Q}^{Q} Aq1 … Aqn e^{j(ωq1 + … + ωqn)t}

whose frequency components are all possible combinations of the input ωq:

ωn,v = ωq1 + … + ωqn   (1.18)
= m−Q ω−Q + … + m−1 ω−1 + m1 ω1 + … + mQ ωQ

where v = [m−Q … m−1 m1 … mQ] is the nth-order mixing vector, which must verify

Σ_{q=−Q}^{Q} mq = m−Q + … + m−1 + m1 + … + mQ = n   (1.19)

For example, a two-tone input like the one of (1.14) will produce the following mixing products of order 1, ω1,v:

ω1,v = −ω2, −ω1, ω1, ω2   (1.20)

the following of order 2, ω2,v:

ω2,v = −2ω2, −ω2 − ω1, −2ω1, ω1 − ω2, dc, ω2 − ω1, 2ω1, ω1 + ω2, 2ω2   (1.21)

and the following ones of order 3, ω3,v:

ω3,v = −3ω2, −2ω2 − ω1, −ω2 − 2ω1, −3ω1, −2ω2 + ω1, −ω2, −ω1, −2ω1 + ω2, 2ω1 − ω2, ω1, ω2, 2ω2 − ω1, 3ω1, 2ω1 + ω2, ω1 + 2ω2, 3ω2   (1.22)

Obviously, each of these mixing products can be generated by different arrangements of the same input tones. For instance, 2ω1 − ω2 can be generated in three different ways: ω1 + ω1 − ω2, ω1 − ω2 + ω1, and −ω2 + ω1 + ω1, whereas ω1 can be generated from the following combinations: ω1 + ω1 − ω1, ω1 − ω1 + ω1, −ω1 + ω1 + ω1, involving only ±ω1; and ω1 + ω2 − ω2, ω1 − ω2 + ω2, ω2 + ω1 − ω2, ω2 − ω2 + ω1, −ω2 + ω2 + ω1, −ω2 + ω1 + ω2, involving ω1 and ±ω2.

Actually, the number of these possible combinations can be directly calculated from the multinomial coefficient:

tn,v = n! / (m−Q! … m−1! m1! … mQ!)   (1.23)

In fact, since the spectral line at 2ω1 − ω2 is characterized by the mixing vector ν = [1 0 2 0], it will lead to a multinomial coefficient of

tn,v = 3! / (1! 0! 2! 0!) = 3   (1.24)

while the spectral line at ω1 can be given by a mixing vector of ν1 = [0 1 2 0] and another one of ν2 = [1 0 1 1], leading to the following multinomial coefficients:

tn,v1 = 3! / (0! 1! 2! 0!) = 3   and   tn,v2 = 3! / (1! 0! 1! 1!) = 6   (1.25)

So, according to these derivations, the output of (1.7) to (1.14) can be calculated from the polynomial response to (1.15) and then converted back to cosines using the Euler relation. Alternatively, noting that the output spectrum must be symmetrical, this result may also be determined by calculating all the possible mixing vectors generating only positive frequencies, and their corresponding multinomial coefficients, and then recovering the cosine representation by simply multiplying these coefficients by 2. That is, the amplitude of each mixing product will be tn,v/2^(n−1) except, naturally, if it falls at dc, where it will be tn,v/2^n. Using this procedure [and again the assumption of slowly varying A(t) and θ(t)], the desired output of (1.7) to (1.14) was found to be
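A direct numerical check of (1.23) for the vectors quoted in (1.24) and (1.25) follows, as a short sketch using the two-tone vector layout [m−2 m−1 m1 m2] from the text.

```python
# Multinomial coefficient t_{n,v} = n!/(m_-Q! ... m_Q!) for a mixing vector v.
from math import factorial

def t_nv(v):
    """Return the number of arrangements associated with mixing vector v."""
    n = sum(v)
    denom = 1
    for m in v:
        denom *= factorial(m)
    return factorial(n)//denom

print(t_nv([1, 0, 2, 0]))   # 2*w1 - w2              -> 3, as in (1.24)
print(t_nv([0, 1, 2, 0]))   # w1 via +/-w1 only      -> 3, as in (1.25)
print(t_nv([1, 0, 1, 1]))   # w1 via w1 and +/-w2    -> 6, as in (1.25)
```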

yNL(t) = a1 A1(t) cos [ω1t + θ(t) − φ110] + a1 A2 cos (ω2t − φ101)
+ (1/2) a2 [A1(t)² + A2²] + a2 A1(t) A2 cos [(ω2 − ω1)t − θ(t) − φ2−11]
+ a2 A1(t) A2 cos [(ω1 + ω2)t + θ(t) − φ211]
+ (1/2) a2 A1(t)² cos [2ω1t + 2θ(t) − φ220] + (1/2) a2 A2² cos (2ω2t − φ202)
+ (3/4) a3 A1(t)² A2 cos [(2ω1 − ω2)t + 2θ(t) − φ32−1]
+ [(3/4) a3 A1(t)³ + (6/4) a3 A1(t) A2²] cos [ω1t + θ(t) − φ310]
+ [(6/4) a3 A1(t)² A2 + (3/4) a3 A2³] cos (ω2t − φ301)
+ (3/4) a3 A1(t) A2² cos [(2ω2 − ω1)t − θ(t) − φ3−12]
+ (1/4) a3 A1(t)³ cos [3ω1t + 3θ(t) − φ330]
+ (3/4) a3 A1(t)² A2 cos [(2ω1 + ω2)t + 2θ(t) − φ321]
+ (3/4) a3 A1(t) A2² cos [(ω1 + 2ω2)t + θ(t) − φ312]
+ (1/4) a3 A2³ cos (3ω2t − φ303)   (1.26)

where φ110 = ω1τ1, φ101 = ω2τ1, φ2−11 = ω2τ2 − ω1τ2, φ220 = 2ω1τ2, φ211 = ω1τ2 + ω2τ2, φ202 = 2ω2τ2, φ32−1 = 2ω1τ3 − ω2τ3, φ310 = ω1τ3, φ301 = ω2τ3, φ3−12 = 2ω2τ3 − ω1τ3, φ330 = 3ω1τ3, φ321 = 2ω1τ3 + ω2τ3, φ312 = ω1τ3 + 2ω2τ3, and φ303 = 3ω2τ3, and whose inband components are only

a1 A1(t) cos [ω1t + θ(t) − φ110] + a1 A2 cos (ω2t − φ101)
+ (3/4) a3 A1(t)² A2 cos [(2ω1 − ω2)t + 2θ(t) − φ32−1]
+ [(3/4) a3 A1(t)³ + (6/4) a3 A1(t) A2²] cos [ω1t + θ(t) − φ310]
+ [(6/4) a3 A1(t)² A2 + (3/4) a3 A2³] cos (ω2t − φ301)
+ (3/4) a3 A1(t) A2² cos [(2ω2 − ω1)t − θ(t) − φ3−12]   (1.27)

As expected, (1.27) includes two linear outputs proportional to the first-degree coefficient a1, and six more nonlinear components arranged in four different frequencies. From these, the sideband components at 2ω1 − ω2 and 2ω2 − ω1 are usually known as the intermodulation distortion (IMD). Strictly speaking, every mixing product could be called an intermodulation component, since it results from intermodulating two or more different tones. But, although usage is not entirely uniform, the term IMD is usually reserved for those particular sideband components. Similarly to what we have already discussed for the amplitude modulated one-tone excitation, they constitute a form of adjacent-channel distortion.

Beyond these IMD products, (1.27) also shows four cochannel distortion components located around ω1 and ω2. Two of those are given as

(3/4) a3 A1(t)³ cos [ω1t + θ(t) − φ310]   (1.28)

and

(3/4) a3 A2³ cos (ω2t − φ301)   (1.29)

which are similar in form. They are both the cochannel distortion outcomes that would appear if the tones at ω1 and ω2 were used, one by one, as independent excitations. Noting that (1.28) can be rewritten as

[(3/4) a3 A1(t)²] A1(t) cos [ω1t + θ(t) − φ310]   (1.30)

and that A1(t)² must include a dc term plus baseband components and second harmonics of A1(t)'s own frequency components, we must conclude that (1.28) actually includes many distortion components that are inherently distinct from the input, but also some others that constitute an exact replica of the input. In mathematical terms, this means that the cochannel distortion has components that are uncorrelated with the input and the linear output, and others that are correlated with these [2, 3].2 Since part of the output is uncorrelated with the input signal, it does not contain the desired information and thus behaves towards it as random noise. Its presence is a major source of perturbation to the processed data, a reason why it is sometimes called intermodulation noise.
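To make this split concrete, the following sketch (using an arbitrary, assumed envelope) projects the cubic envelope term A1(t)³ onto A1(t): the projection is the signal-correlated part, while the orthogonal residual behaves like added noise in the sense of the cross-correlation defined in footnote 2.

```python
# Splitting A(t)^3 into a part correlated with A(t) and an uncorrelated residual.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1e-4)
A = 1 + 0.3*np.cos(2*np.pi*5*t) + 0.1*rng.standard_normal(len(t))  # assumed envelope

cubic = A**3
alpha = np.dot(cubic, A)/np.dot(A, A)   # least-squares projection onto A(t)
corr_part = alpha*A                     # scaled replica, correlated with the input
uncorr_part = cubic - corr_part         # orthogonal residual, "intermodulation noise"

print(alpha)                            # effective (power-dependent) gain term
print(np.dot(uncorr_part, A))           # ~0: residual is uncorrelated with A(t)
```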

On the other hand, the correlated components carry exactly the same information as the linear output. The only difference from the true first-order components is that they are not a linear replica of the input, as their proportionality constant, or gain, varies with the signal amplitude squared. That is, from a certain viewpoint, they should be considered nonlinear distortion since they are, actually, a nonlinear deviation from the ideal linear behavior. But, from another perspective, they can also be considered as useful signal since, added to the first-order linear components and the term proportional to A1(t)A2², they simply make the overall system gain dependent on the average excitation power.

2. Rigorously speaking, two signals, x(t) and y(t), are said to be uncorrelated when the cross-correlation between them is zero: Rxy(τ) = ∫_{−∞}^{+∞} x(t) y(t + τ) dt = 0. If Rxy(τ) ≠ 0, the signals are correlated.

This duality of roles can be perfectly accepted if we think of what we expect from an electronic measurement system and from a wireless system. In the first case, since we want the system’s output to be a scaled replica of the measured quantity, any deviation from linearity is a direct source of measurement error.

Therefore, in this scenario, we would be pushed to consider those third-order signal-correlated components as distortion. In the second case, since we are not too worried about the overall system gain, whose variations are, after all, generally corrected by an automatic gain control (AGC) loop, we would be pushed to consider those components as desired signal and not distortion.

Because, in general, φ110 is different from φ310, and φ101 is different from φ301, the addition of the signal-correlated third-order components to the linear components constitutes a vector addition, which means that variations in input amplitude will produce changes not only in output amplitude, but also in output phase.

These two effects, whose graphical illustration is depicted in Figure 1.10, are two of the most significant properties of nonlinear telecommunication systems.

They are traditionally characterized with sinusoidal excitations by the so-called AM-AM conversion—meaning that input amplitude modulation induces output amplitude modulation—and AM-PM conversion, which describes the way input amplitude modulation can also produce output phase modulation.
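A compact way to see both effects is to form the complex (vector) sum of the linear term and the correlated third-order term at the fundamental, as sketched below; the coefficients and phases are purely illustrative assumptions, with a negative a3 to produce gain compression.

```python
# AM-AM and AM-PM curves from the vector sum of the linear and cubic fundamental terms.
import numpy as np

a1, a3 = 1.0, -0.25                  # illustrative; a3 < 0 gives gain compression
phi1, phi3 = 0.1, 0.6                # illustrative phase lags phi_110, phi_310 (rad)

A = np.linspace(0.01, 1.0, 50)       # input envelope amplitude sweep
fund = a1*A*np.exp(-1j*phi1) + 0.75*a3*A**3*np.exp(-1j*phi3)  # fundamental phasor

am_am = np.abs(fund)/A               # envelope gain vs. input amplitude (AM-AM)
am_pm = np.unwrap(np.angle(fund))    # output phase vs. input amplitude (AM-PM)

print(am_am[0], am_am[-1])                           # gain drops as A grows
print(np.degrees(am_pm[0]), np.degrees(am_pm[-1]))   # phase shifts as A grows
```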

In general, since AM-AM and AM-PM conversions are driven by amplitude envelope variations, they can be induced by ω1 onto ω1 and ω2 onto ω2, but also by ω2 onto ω1 and ω1 onto ω2. This is, for instance, the case of the term

(6/4) a3 A1(t)² A2 cos (ω2t − φ301)   (1.31)

where the amplitude variation of one of the signals (in the present example, at ω1) induces amplitude and phase variations on the other (at ω2). In telecommunication systems this is known as cross-modulation, which is responsible for undesired channel cross-talk, as was already seen in Figure 1.6.
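The following sketch illustrates this transfer of modulation numerically (all values are assumptions): a modulated tone at ω1 and an unmodulated tone at ω2 pass through a memoryless cubic, and modulation sidebands then appear around ω2, exactly where the (6/4) a3 A1(t)² A2 term predicts them.

```python
# Cross-modulation sketch: the envelope of tone 1 reappears as sidebands around tone 2.
import numpy as np

fs, f1, f2, fm = 1e6, 100e3, 130e3, 1e3      # sample rate, tones, modulation rate
t = np.arange(0, 0.1, 1/fs)                  # 0.1 s -> 10-Hz FFT resolution
A1 = 1 + 0.5*np.cos(2*np.pi*fm*t)            # modulated envelope on tone 1
A2 = 0.2                                     # unmodulated tone 2
x = A1*np.cos(2*np.pi*f1*t) + A2*np.cos(2*np.pi*f2*t)
y = x - 0.2*x**3                             # a1 = 1, a3 = -0.2 (illustrative)

Y = np.abs(np.fft.rfft(y))*2/len(t)
f = np.fft.rfftfreq(len(t), 1/fs)

for df in (0, fm, 2*fm):                     # carrier and transferred sidebands
    k = np.argmin(np.abs(f - (f2 + df)))
    print(f"line at f2 + {df/1e3:.0f} kHz: {Y[k]:.4e}")   # nonzero sidebands appear
```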

Figure 1.10 Illustration of AM-AM and AM-PM conversions in a nonlinear system driven by a signal of increasing amplitude envelope. y1(t): linear component; y3(t): third-order signal-correlated distortion component; yr(t): resultant output component; and φ: resultant output phase.

Finally, the term

(6/4) a3 A1(t) A2² cos [ω1t + θ(t) − φ310]   (1.32)

is used to model desensitization, that is, the compression of gain (supposing a3 and φ310 result in a phase opposing that of a1 and φ110), and thus the degradation of the system's sensitivity to one signal (in this case, at ω1), caused by another, stronger, one (at ω2). When the difference in amplitudes between the desired signal and the interferer is so large that a dramatic desensitization is noticed, the small signal is said to be blocked and the interferer is named a blocker or jammer.

Probably the most obvious reflection of these desensitization or blocking effects is the dazzle we have all experienced when a strong source of light is pointed at us at night.

Table 1.1 summarizes the above definitions by identifying all the distortion components present in the output of our third-degree polynomial subject to a two-tone excitation signal, as in (1.26).
