Archive for November, 2010

Problem 1.

Statement

A particle of mass is free to move along the x-direction such that . The state of the system is represented by the wavefunction Eq. (4.74)

with given by Eq. (4.59).

Note that I’ve inserted a factor above that isn’t in the text, because otherwise will not be unit normalized (assuming is normalized in wavenumber space).

\begin{itemize}
\item
(a) What is the group velocity associated with this state?
\item
(b) What is the probability for measuring the particle at position at time ?
\item
(c) What is the probability per unit length for measuring the particle at position at time ?
\item
(d) Explain the physical meaning of the above results.
\end{itemize}

Solution

(a). Group velocity.

To calculate the group velocity we need to know the dependence of on .

Let’s step back and consider the time evolution action on . For the free particle case we have

Writing we have

Each successive application of will introduce another power of , so once we sum all the terms of the exponential series we have

Comparing with 1.1 we find

This completes this section of the problem since we are now able to calculate the group velocity
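As a numerical sanity check (a sketch of my own in natural units, with illustrative values for the mass and wavenumber that are not from the text), the free-particle dispersion ω(k) = ħk²/2m gives a group velocity dω/dk = ħk/m, twice the phase velocity ω/k:

```python
# Free-particle dispersion check: v_g = d(omega)/dk at k0 should be
# hbar*k0/m, while the phase velocity omega(k0)/k0 is half of that.
hbar = 1.0   # natural units, illustrative
m = 1.0
k0 = 2.0

def omega(k):
    return hbar * k * k / (2.0 * m)

h = 1e-6   # step for the central difference
v_group = (omega(k0 + h) - omega(k0 - h)) / (2.0 * h)
v_phase = omega(k0) / k0

print(v_group)   # ~ hbar*k0/m = 2.0
print(v_phase)   # ~ hbar*k0/(2m) = 1.0
```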

(b). What is the probability for measuring the particle at position at time ?

In order to evaluate the probability, it looks desirable to evaluate the wave function integral 1.4.
Writing , the exponent of that integral is

The portion of the exponential

then comes out of the integral. We can also make a change of variables to evaluate the remainder of the Gaussian and are left with

Observe that from 1.2 we can compute , which could be substituted back into 1.7 if desired.

Our probability density is

With a final regrouping of terms, this is

As a sanity check we observe that this integrates to unity for all as desired. The probability that we find the particle at position is then

The only simplification we can make is to rewrite this in terms of the complementary error function

Writing

we have

Sanity checking this result, we note that since the probability for finding the particle in the range is as expected.
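The complementary error function form can be spot-checked numerically. This is a standalone sketch with made-up values for the mean and width (not the constants from the problem), comparing ½ erfc((x₀ − μ)/(√2 σ)) against a direct Riemann sum of a normalized Gaussian density:

```python
import math

# For a normalized Gaussian density with mean mu and width sigma, the
# probability of finding x >= x0 is (1/2) erfc((x0 - mu)/(sqrt(2) sigma)).
mu, sigma, x0 = 0.0, 1.3, 0.7   # illustrative values

p_erfc = 0.5 * math.erfc((x0 - mu) / (math.sqrt(2.0) * sigma))

# Midpoint-rule integration of the density over [x0, mu + 12 sigma],
# far enough out that the truncated tail is negligible.
n = 200000
a, b = x0, mu + 12.0 * sigma
dx = (b - a) / n
p_num = sum(
    math.exp(-((a + (i + 0.5) * dx - mu) ** 2) / (2.0 * sigma ** 2)) * dx
    for i in range(n)
) / (sigma * math.sqrt(2.0 * math.pi))

print(p_erfc, p_num)   # the two should agree closely
```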

(c). What is the probability per unit length for measuring the particle at position at time ?

This unit length probability is thus

(d). Explain the physical meaning of the above results.

To get an idea what the group velocity means, observe that we can write our wavefunction 1.1 as

We see that the phase coefficient of the Gaussian “moves” at the rate of the group velocity . Also recall that in the text it is noted that the time dependent term 1.11 can be expressed in terms of position and momentum uncertainties , and . That is

This makes it evident that the probability density flattens and spreads over time at a rate equal to the uncertainty of the group velocity (since ). It is interesting that something as simple as this phase change results in a physically measurable phenomenon. We see that, as a direct result of this phase change that is linear in time, we are less able to find the particle localized around its original position as more time elapses.
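Both effects, translation at the group velocity and spreading proportional to the momentum-space width, can be seen in a brute-force superposition of plane waves. The sketch below uses natural units ħ = m = 1 and illustrative parameters of my own choosing, and checks that the packet's mean moves at v_g = ħk₀/m while its variance grows from σ₀² to σ₀² + (σ_k t)²:

```python
import math, cmath

# Gaussian superposition of plane waves (hbar = m = 1, illustrative values).
k0, dk = 5.0, 0.5   # center wavenumber and momentum-space width sigma_k
ks = [k0 + (i - 200) * 0.01 for i in range(401)]
amp = [math.exp(-((k - k0) ** 2) / (4.0 * dk * dk)) for k in ks]

def density(x, t):
    # |psi(x, t)|^2 for the superposition, each mode evolving with
    # phase exp(i k x - i k^2 t / 2).
    psi = sum(a * cmath.exp(1j * (k * x - 0.5 * k * k * t))
              for a, k in zip(amp, ks))
    return abs(psi) ** 2

def moments(t):
    # mean and variance of the density on a grid around the expected center
    xc = k0 * t
    xs = [xc + 0.05 * (j - 300) for j in range(601)]
    rho = [density(x, t) for x in xs]
    norm = sum(rho)
    mean = sum(x * r for x, r in zip(xs, rho)) / norm
    var = sum((x - mean) ** 2 * r for x, r in zip(xs, rho)) / norm
    return mean, var

mean0, var0 = moments(0.0)
mean2, var2 = moments(2.0)
# mean2 should be ~ k0 * t = 10, and var2 ~ var0 + (dk * t)^2 = 2.
```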

Problem 2.

Statement

A particle with intrinsic angular momentum or spin is prepared in the spin-up state with respect to the z-direction, . Determine

and

and explain what these relations say about the system.

Solution: Uncertainty of with respect to

Noting that we have

The average outcome for many measurements of the physical quantity associated with the operator when the system has been prepared in the state is .

We could also compute this from the matrix representations, but it is slightly more work.

Operating once more with on the zero ket vector still gives us zero, so we have zero in the root for 2.16

What does 2.20 say about the state of the system? Given many measurements of the physical quantity associated with the operator , where the initial state of the system is always , then the average of the measurements of the physical quantity associated with is zero. We can think of the operator as a representation of the observable, “how different is the measured result from the average ”.

So, given a system prepared in state , and repeated measurements capable only of examining spin-up, we find that the system is never any different from its initial spin-up state. We have no uncertainty that we will measure any difference from spin-up on average, when the system is prepared in the spin-up state.

Solution: Uncertainty of with respect to

For this second part of the problem, we note that we can write

So the expectation value of with respect to this state is

After repeated preparation of the system in state , the average measurement of the physical quantity associated with operator is zero. In terms of the eigenstates for that operator and we have equal probability of measuring either given this particular initial system state.

For the variance calculation, this reduces our problem to the calculation of , which is

so for 2.22 we have

The average of the absolute magnitude of the physical quantity associated with operator is found to be when repeated measurements are performed given a system initially prepared in state . We saw that the average value for the measurement of that physical quantity itself was zero, showing that we have equal probabilities of measuring either for this experiment. A measurement that would show the system was in the x-direction spin-up or spin-down states would find that these states are equi-probable.
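A quick Pauli-matrix sketch (my own check, not from the text) confirms both computations: for the z spin-up state, ⟨Sx⟩ = 0 while ⟨Sx²⟩ = ħ²/4, consistent with the ±ħ/2 outcomes being equiprobable:

```python
# Spin-1/2 expectation values for the z spin-up state, using
# Sx = (hbar/2) sigma_x in matrix form (hbar = 1, illustrative).
hbar = 1.0
up = [1.0 + 0j, 0.0 + 0j]                    # |+z>
Sx = [[0.0, hbar / 2.0], [hbar / 2.0, 0.0]]  # (hbar/2) * sigma_x

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def expect(M, v):
    # <v| M |v> for a normalized state v
    Mv = matvec(M, v)
    return sum(v[i].conjugate() * Mv[i] for i in range(2)).real

Sx2 = [[sum(Sx[i][k] * Sx[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]

exp_Sx = expect(Sx, up)     # expected: 0
exp_Sx2 = expect(Sx2, up)   # expected: hbar^2 / 4 = 0.25
print(exp_Sx, exp_Sx2)
```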

Grading comments.

I lost one mark on the group velocity response. Instead of 3.23 he wanted

since peaks at .

I’ll have to go back and think about that a bit, because I’m unsure of the last bits of the reasoning there.

I also lost 0.5 and 0.25 (twice) because I didn’t explicitly state that the probability that the particle is at , a specific single point, is zero. I thought that was obvious and didn’t have to be stated, but it appears expressing this explicitly is what he was looking for.

Curiously, one thing that I didn’t lose marks on was the wrong answer for the probability per unit length. What he was actually asking for was the following

That’s a much more sensible-seeming quantity to calculate than what I did, but I don’t think I can be faulted too much, since the phrase was never used in the text nor in the lectures.

Setup.

It is relevant to describing the oscillation of molecules, quantum states of light, vibrations of the lattice structure of a solid, and so on.

FIXME: projected picture of masses on springs, with a ladle shaped well, approximately harmonic about the minimum of the bucket.

The problem to solve is the one dimensional Hamiltonian

where is the mass, is the frequency, is the position operator, and is the momentum operator. Of these quantities, and are classical quantities.

This problem can be used to illustrate some of the reasons why we study the different pictures (Heisenberg, Interaction and Schr\”{o}dinger). This is a problem well suited to all of these. (FIXME: look up an example of this with the interaction picture; the book covers H and S methods.)

We attack this with a non-intuitive, but cool technique. Introduce the raising and lowering operators:

\paragraph{Question:} Are we using the dagger for more than Hermitian conjugation in this case?
\paragraph{Answer:} No, this is precisely the Hermitian conjugation operation.

Solving for and in terms of and , we have

or

Express in terms of and

Since then we can show that . Solve for as follows

Comparing LHS and RHS we have as stated

and thus from 8.175 we have

Let be the eigenstate of so that . From 8.177 we have

or

We wish now to find the eigenstates of the “Number” operator , which are simultaneously eigenstates of the Hamiltonian operator.

Observe that we have

where we used .

or

The new state is presumed to lie in the same space, expressible as a linear combination of the basis states in this space. Examining the effect of the operator on this new state, we find that the energy is changed, but the state is otherwise unchanged. Any state is an eigenstate of , and therefore also an eigenstate of the Hamiltonian.

Play the same game and win big by discovering that

There will be some state such that

which implies

so from 8.180 we have

Observe that we can identify for

or

or

where .

We can write

or

We call this operator , the number operator, so that
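As a concreteness check (my own finite-matrix truncation, not from the text), the lowering operator with matrix elements ⟨n−1|a|n⟩ = √n makes N = a†a diagonal with eigenvalues 0, 1, 2, …:

```python
import math

# Finite truncation of the lowering operator a, with a|n> = sqrt(n)|n-1>.
dim = 6
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(dim)]
     for i in range(dim)]
adag = [[a[j][i] for j in range(dim)] for i in range(dim)]   # real transpose

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

N = matmul(adag, a)
diag = [N[i][i] for i in range(dim)]
print(diag)   # approximately [0, 1, 2, 3, 4, 5]
```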

Relating states.

Recall the calculation we performed for

Where , and are constants. The next game we are going to play is to work out for the lowering operation

and the raising operation

For the Hermitian conjugate of we have

So

Expanding the LHS we have

For

Similarly

and

for

Heisenberg picture.

\paragraph{How does the lowering operator evolve in time?}

\paragraph{A:} Recall that for a general operator , we have for the time evolution of that operator

Let’s solve this one.

Even though is an operator, it can undergo a time evolution and we can think of it as a function, and we can solve for in the differential equation

This has the solution

here is an operator, the value of that operator at . The exponential here is just a scalar (not affected by the operator, so we can put it on either side of the operator as desired).

\paragraph{CHECK:}

A couple comments on the Schrödinger picture.

We don’t do this in class, but it is very similar to the approach of the hydrogen atom. See the text for full details.

In the Schrödinger picture,

This does not carry over directly to the wave function representation, but we can relate these by noting that we get this as a consequence of the identification .

In 11.204, we can switch to dimensionless quantities with

with

This gives, with ,

We can use polynomial series expansion methods to solve this, and find that we require a terminating expression, and write this in terms of the Hermite polynomials (courtesy of the clever French once again).

When all is said and done we will get the energy eigenvalues once again
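Those eigenvalues can also be recovered numerically. Here is a standalone shooting-method sketch (my own, in natural units ħ = m = ω = 1) that integrates −ψ''/2 + x²ψ/2 = Eψ across the well and bisects on the energy until the endpoint changes sign, recovering the ground state E₀ = 1/2:

```python
def endpoint(E, L=5.0, h=1e-3):
    # Integrate u'' = (x^2 - 2E) u rightward from deep in the left
    # forbidden region, starting from a tiny nonzero value.
    x = -L + h
    u_prev, u = 0.0, 1e-8
    while x < L:
        u_prev, u = u, 2.0 * u - u_prev + h * h * (x * x - 2.0 * E) * u
        x += h
    return u

# Bisect on E: the sign of u(+L) flips as E crosses an eigenvalue.
lo, hi = 0.4, 0.6          # bracket chosen around the expected answer
f_lo = endpoint(lo)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    f_mid = endpoint(mid)
    if f_lo * f_mid <= 0.0:
        hi = mid
    else:
        lo, f_lo = mid, f_mid

E0 = 0.5 * (lo + hi)
print(E0)   # ~ 0.5, i.e. hbar*omega/2
```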

Back to the Heisenberg picture.

Let us express

With

we have

and

Recall that our matrix operator is

We have then

NOTE: picture of the solution to this LDE on slide…. but I didn’t look closely enough.

Motivation.

Notes

Problems

Problem 2.

Statement.

On the basis of the results already derived for the harmonic oscillator, determine the energy eigenvalues and the ground-state wavefunction for the truncated oscillator

Solution.

We require , so our solutions are limited to the truncated odd harmonic oscillator solutions. The normalization will be different since only the integration range is significant. Our energy eigenvalues are

And its wave function is

where is the first odd wavefunction for the non-truncated oscillator. Normalizing this we find , or

Problem 3.

Statement.

Show that for the harmonic oscillator in the state , the following uncertainty product holds.

Solution.

I tried this first explicitly with the first two wave functions

For the state we find easily that

and this is zero since we are integrating an odd function over an even range (presuming that we take the principal value of the integral).

For the state this we have

Since each is a polynomial times a factor we have for all states .

The momentum expectation values for states and are also fairly simple to compute. We have

For the state our derivative is odd since a factor of is brought down, and we are again integrating an odd function over an even range. For the case our derivative is proportional to

Again, this is an even function, while is odd, so we have zero. Noting that we can express each in terms of Hermite polynomials

where is even and is odd, we note that this expectation value will always be zero since we will have an even times odd function in the integration kernel.

Knowing that the position and momentum expectation values are zero reduces this problem to the calculation of . Either of these expectation values is again not too hard to compute for . However, we now have to keep track of the proportionality constants. As expected this yields

These are respectively

However, these integrals were only straightforward (albeit tedious) to calculate because we had explicit representations for and . For the general wave function, what we have to work with is either the Hermite polynomial representation of 3.7 or the derivative form

Expanding this explicitly for arbitrary isn’t going to be feasible. We can reduce the scope of the problem by trying to be lazy and see how some work can be avoided. One possible trick is noting that we can express the squared momentum expectation in terms of the Hamiltonian

So we can get away with only calculating , an exercise in integration by parts

The second term in this remaining integral is proportional to , which leaves us with

Our squared momentum expectation value is then

This completes the problem, and we are left with

Problem 4.

Statement.

Consider the following two-dimensional harmonic oscillator problem:

where are the coordinates of the particle. Use the separation of variables technique to obtain the energy eigenvalues. Discuss the degeneracy in the eigenvalues if .

Solution.

Write . Substituting and dividing throughout by , we have

Introducing a pair of constants for each of the independent terms, we have

For each of these equations we have a set of quantized eigenvalues and can write

The complete eigenstates are then

with total energy satisfying

A general state requires a double sum over the possible combinations of states , however if , we cannot distinguish between and based on the energy eigenvalues

In this case, we can write the wave function corresponding to a general state for the system as just . This reduction in the cardinality of this set of basis eigenstates is the degeneracy to be discussed.
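The degeneracy is easy to tabulate. In natural units the level with total quantum number n = nx + ny (so E/(ħω) = n + 1) is (n + 1)-fold degenerate, as a quick count shows:

```python
from collections import Counter

# Count how many (nx, ny) pairs share each total n = nx + ny.
levels = Counter(nx + ny for nx in range(10) for ny in range(10))

# Only totals fully covered by the 10x10 grid are meaningful here.
degeneracy = {n: levels[n] for n in range(10)}
print(degeneracy)   # level n is (n + 1)-fold degenerate
```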

Problem 5,6.

Statement.

Consider now a variation on Problem 4 in which we have a coupled oscillator with the potential given by

Obtain the energy eigenvalues by changing variables to such that the new potential is quadratic in , without the coupling term.

Solution.

This has the look of a diagonalization problem so we write the potential in matrix form

The similarity transformation required is

Our change of variables is therefore

Our Laplacian should also remain diagonal under this orthonormal transformation, but we can verify this by expanding out the partials explicitly

Squaring and summing we have

Our transformed Hamiltonian operator is thus

So, provided , the energy eigenvalue equation is given by 3.26 with , and .
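The decoupling can be verified numerically. In the sketch below the potential matrix is taken as [[k, c], [c, k]] with illustrative constants of my own choosing (the actual constants were in the displayed equations); the 45-degree rotation turns the potential into independent oscillators with effective spring constants k + c and k − c:

```python
import math

k, c = 2.0, 0.5   # illustrative constants, not from the text

def V(x, y):
    # coupled potential (1/2) k (x^2 + y^2) + c x y,
    # i.e. quadratic form with matrix [[k, c], [c, k]]
    return 0.5 * k * (x * x + y * y) + c * x * y

def V_rot(u, v):
    # decoupled form after the rotation: spring constants k + c and k - c
    return 0.5 * ((k + c) * u * u + (k - c) * v * v)

x, y = 0.7, -1.2
u, v = (x + y) / math.sqrt(2.0), (x - y) / math.sqrt(2.0)
diff = abs(V(x, y) - V_rot(u, v))
print(diff)   # the two forms agree
```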

Problem 7.

Statement.

Consider two coupled harmonic oscillators in one dimension of natural length and spring constant connecting three particles located at , and . The corresponding Schrödinger equation is given as

Obtain the energy eigenvalues using the matrix method.

Solution.

Let’s start with an initial simplifying substitution to get rid of the factors of . Write

These were picked so that the differences in our quadratic terms involve only factors of

Schrödinger’s equation is now

Putting our potential into matrix form, we have

This symmetric matrix, let’s call it M

has eigenvalues , with orthonormal eigenvectors

Writing

Writing , and , we see that the Laplacian has no mixed partial terms after transformation

Schrödinger’s equation is then just

Or

Separation of variables provides us with one free particle wave equation, and two harmonic oscillator equations

We can borrow the Harmonic oscillator energy eigenvalues from problem 4 again with , and .
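The eigenpairs can be spot-checked numerically. The matrix below is my reconstruction of the standard potential matrix for three equal masses joined by two springs (the displayed form was in the stripped equations); the familiar eigenvalues 0, 1, 3 correspond to the center-of-mass, antisymmetric, and breathing modes:

```python
import math

# Potential matrix for V proportional to (x1 - x2)^2 + (x2 - x3)^2
# (my reconstruction of the standard form, not copied from the text).
M = [[1.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]

pairs = [
    (0.0, [1.0 / math.sqrt(3.0)] * 3),                          # center of mass
    (1.0, [1.0 / math.sqrt(2.0), 0.0, -1.0 / math.sqrt(2.0)]),  # antisymmetric
    (3.0, [1.0 / math.sqrt(6.0), -2.0 / math.sqrt(6.0),
           1.0 / math.sqrt(6.0)]),                              # breathing
]

# Largest component-wise residual of M v - lambda v over all pairs.
residual = max(
    abs(sum(M[i][j] * v[j] for j in range(3)) - lam * v[i])
    for lam, v in pairs
    for i in range(3)
)
print(residual)   # ~ 0
```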

Problem 8.

Statement.

As a variation of Problem 7 assume that the middle particle at has a different mass . Reduce this problem to the form of Problem 7 by a scale change in and then use the matrix method to obtain the energy eigenvalues.

Solution.

We write , and then Schrödinger’s equation takes the form

With , we have

We find that this symmetric matrix has eigenvalues , and eigenvectors

The rest of the problem is now no different than the tail end of Problem 7, and we end up with , .

Motivation.

For the hydrogen atom, after some variable substitutions the radial part of the Schrödinger equation takes the form

In [1] it is argued that the functions are of the form

where is a polynomial in , specifically Laguerre polynomials. Let’s look at some of those details a bit more closely.

Guts

The first part of the argument comes from considering the case, where Schrödinger’s equation is approximately

This large approximation has solutions , and we take the negative sign case as physically meaningful in order for the wave function to be normalizable.

Next it is argued that polynomial multiples of this will also be approximate solutions. Utilizing a monomial multiple of the decreasing exponential as a trial solution, let’s compute how this fits into the radial Schrödinger equation 1.1 above. Write

The derivatives are

and substitution yields

There are two things that this can show. The first is that for this produces a polynomial with degree and terms multiplied by the exponential, and we have approximately

The terms will dominate the polynomial, but the exponential dominates all, approaching zero for , just as the non-polynomial-multiplied approximate solution does. This confirms that this polynomial-multiplied exponential still has the desired behavior in the large limit. Also observe that in the limit of small we have approximately

Since as , we require either a different trial solution, or to have a normalizable wavefunction.

Before settling on let’s compute the derivatives for a more general trial function, of the form 1.2, and substitute those. After a bit of computation we find

Putting these together and substitution back into 1.1 yields

In the limit where the terms dominate 2.11 becomes

Again, this provides the or possibilities from the text, and we discard due to non-normalizability. A side question. How does one solve integer equations like this?

What remains?

With killing off the terms, what is our differential equation for ?

Comparing this to [2] we have something pretty close to the stated differential equation for the Laguerre polynomial. Ours is of the form

where the differential equation in the wikipedia article has . No change of variables involving a scalar multiplicative factor for appears to be able to get it into that form, and I am guessing this is the differential equation for the associated Laguerre polynomial (something not stated in the wikipedia article).

Let’s derive the recurrence relations for the coefficients, and work out the first few such polynomials to compare. Plugging in a polynomial of the form

where is assumed to be non-zero. We also assume that this polynomial is not an infinite series (ruling out the infinite series with convergence arguments is covered nicely in the text).

we then have for 2.13

Observe first that since we have assumed , we must have . Requiring termwise equality with zero gives us the recurrence relation between the coefficients, for

Repeated application shows the pattern for these coefficients, and with we have

With

Or

Forming the complete series, we can get at the form of the associated Laguerre polynomials in the wikipedia article without too much trouble

Dropping the proportionality, this simplifies to just
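As a cross-check on the series form (using the wikipedia-style convention discussed here, which need not match the text's normalization), the associated Laguerre polynomial L_n^α(x) = Σᵢ (−1)ⁱ C(n+α, n−i) xⁱ/i! should satisfy the ODE x y'' + (α + 1 − x) y' + n y = 0:

```python
import math

def laguerre_coeffs(n, alpha):
    # Series coefficients of L_n^alpha in the wikipedia-style convention.
    return [(-1) ** i * math.comb(n + alpha, n - i) / math.factorial(i)
            for i in range(n + 1)]

def polyval(c, x):
    return sum(ci * x ** i for i, ci in enumerate(c))

def deriv(c):
    # Coefficients of the derivative polynomial.
    return [i * c[i] for i in range(1, len(c))]

n, alpha, x = 4, 2, 1.7   # sample degree, order, and evaluation point
c = laguerre_coeffs(n, alpha)
y = polyval(c, x)
yp = polyval(deriv(c), x)
ypp = polyval(deriv(deriv(c)), x)

# Residual of x y'' + (alpha + 1 - x) y' + n y, which should vanish.
residual = x * ypp + (alpha + 1 - x) * yp + n * y
print(abs(residual))   # ~ 0
```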

This isn’t necessarily the form of the polynomials used in the text. To see if that is the case, we need to check the normalization.

According to the wikipedia article we have for the associated Laguerre polynomials as defined above

whereas in the text we have

It seems clear that two different notations are being used. In this physical context of wave functions we want the normalization defined by

Using the wikipedia notation, with

we want

Since we have

It looks like there is probably some way to simplify this, and if so we’d be able to map the notation used (without definition) in the text to the notation used in the wikipedia article. If we don’t care about that, nor about the specifics of the normalization constant, then there’s not too much more to say.

This is an ugly kind of place to leave things, but that’s enough for today. It’s too bad that the text isn’t just more explicit, and it’s probably best to refer elsewhere for any more detail. With no specifics about the functions themselves in any form, one has to do that anyways.

Introduce the center of mass coordinates.

We’ll want to solve this using the formalism we’ve discussed. The general problem is a proton, positively charged, with a nearby negative charge (the electron).

Our equation to solve is

Here is the total kinetic energy term.
For hydrogen we can consider the potential to be the Coulomb potential energy function that depends only on . We can transform this using a center of mass transformation. Introduce the centre of mass coordinate and relative coordinate vectors

The notation represents the Laplacian for the positions of the k’th particle, so that if is the position of the first particle, the Laplacian for this is:

Here is the center of mass coordinate, and is the relative coordinate. With this transformation we can reduce the problem to a single coordinate PDE.

We set and , and get

and

where is the total mass, and is the reduced mass.

Aside: WHY do we care (slide of Hydrogen line spectrum shown)? This all started because when people looked at the spectrum for the hydrogen atom, a continuous spectrum was not found. Instead what was found was quantized frequencies. All this abstract Hilbert space notation with its bras and kets is a way of representing observable phenomena.

Also note that we have the same sort of problems in electrodynamics and mechanics, so we are able to recycle this sort of work, either applying it in those problems later, or using those techniques here.

In Electromagnetism these are the problems involving the solution to

or for

where is the electric field and is the electric potential.

We need to solve 6.127 for . In spherical coordinates

where

This all follows by the separation of variables technique that we’ll use here, in E and M, in PDEs, and so forth.

FIXME: picture drawn. Theta is measured down from the axis to the position, and the azimuthal angle is measured in the plane.

Large limit.

but keep and note that is also a solution in the limit of , where is a polynomial.

Let where .

Small limit.

We also want to consider the small limit, and piece together the information that we find. Think about the following. The small or limit gives

\paragraph{Question:} Is this correct?

Not always. Also: we will think about the case later (where would probably need to be retained).

We need:

Instead of using 6.142 as in the text, we must substitute into the above to find

for this equality for all we need

Solutions and can be found to this, and we need s positive for normalizability, which implies

Now we need to find what restrictions we must have on . Recall that we have . Substitution into 6.142 gives

We get

For this to be valid for all ,

or

For large we have

Recall that for the exponential Taylor series we have

for which we have

is behaving like , and if we had that

This is divergent, so for normalizable solutions we require to be a polynomial of a finite number of terms.

The polynomial must stop at , and we must have

From 6.150 we have

so we require

Let , an integer and so that says for

If

we have

where is the Bohr radius, and . In the lecture was originally used for the reduced mass. I’ve switched to earlier so that it cannot be mixed up with this use of for the azimuthal quantum number associated with .

PICTURE ON BOARD. Energy level transitions on graph with differences between to shown, and photon emitted as a result of the to transition.

From Chapter 4 and the story of the spherical harmonics, for a given , the quantum number varies between and in integer steps. The radial part of the solution of this separation of variables problem becomes

where the functions are the Laguerre polynomials, and our complete wavefunction is

Chapter VI.

\begin{itemize}
\item Page 120. in bold. not in bold.
\item Page 123. (6.26). factor missing on RHS.
\item Page 124. Text before (6.37). You say canonical momenta , but call these mechanical momenta on the previous page.
\item Page 125. (6.41). Some s are in bold.
\item Page 126. (6.49). There’s no mention that is constant, leaving it unclear how the gauge condition holds and how the curl of reproduces . This would also help clarify how you are able to write .
\item Page 128. (6.65). should be .
\item Page 129. (6.75). should be .
\end{itemize}

I’ve got a raw stackdump to deal with in a stack corruption issue, but didn’t know the stack layout for the linux amd64 ABI. On AIX, we find nice pairs of stackframe-address/instruction-addresses, and kind of expected to see that on linux too, but didn’t. To see how this works, I compiled a simple program, and walked through a call in the debugger.

The stack pointer had been decremented. Is this a stack frame allocated for the bar1() function itself, or a decrement in preparation for a save of the return address value?

One more instruction step gets me into the new function. This appears to automatically decrement my stack pointer $rsp 8 more bytes, and I now have the return address (0x400705) in the stack in the first 64-bits:

Does the linux (amd64) ABI require every function to allocate a stack frame, even if they don’t use it? I see this in bar1()’s code despite compiling with -O. It appears that the ret instruction also pops from the stack, incrementing $rsp as it branches to that address. It also looks like what we require to find our return address is to look at the pre-decrement value of $rsp (or calculate it), so to navigate the stack manually in a corrupted stack dump, we’ve got to disassemble to know where to look for the next return address. That’s much messier than on AIX, where we can look for the longest chain of stackframe/saved-link-addresses to find a pre-corruption point in the code and try to pinpoint where things went wrong.

Problem:

What ware the eigenvalues and eigenvectors for an electron trapped in a 1D potential well?

MODEL.

Quantum state describes the particle. What should we choose? Try a quantum well with infinite barriers first.

These spherical quantum dots are like quantum wells. When you trap electrons in this scale you’ll get energy quantization.

VISUALIZE.

Draw a picture for with infinite spikes at . (ie: figure 8.1 in the text).

SOLVE.

The first task is to solve the time independent Schrödinger equation.

derivable from

In the position representation, we project onto and solve for . For the problems in Chapter 8,

where

We should be careful to be strict about the notation, and not interchange the operators and their specific representations (ie: not interchanging “little-x” and “big-x”) as we see in the text in this chapter.

Here the potential energy operator is time independent.

If and is time independent then implies

or

Here is the energy eigenvalue, and is the energy eigenstate. Our differential equation now becomes

where for . We won't find anything exactly like this in reality, but this is our first approximation to the quantum dot.

Our differential equation in the well is now

or with

Our solution for is then

and for we have since .

Setting we have

Type I.

, . For we must have

or , where , so our solution is

Type II.

, . For we must have

or , where , so our solution is

Via determinant

We could also write

and then must have zero determinant, or

so we must have

or

regardless of and . We can then determine the solutions 5.106, and 5.107 simply by noting that this value for kills off either the sine or cosine terms of 5.105 depending on whether is even or odd.

CHECK.

satisfy the time independent Schrödinger equation, and the corresponding eigenvalues follow from

or

for .
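A finite-difference sketch (my own, in natural units ħ = m = 1 with the well on [−a, a]) confirms that ψ_n(x) = sin(nπ(x + a)/2a) satisfies −ψ''/2 = E_n ψ with E_n = n²π²/8a²:

```python
import math

a = 1.0   # half-width of the well, illustrative

def psi(n, x):
    # n-th stationary state of the infinite well on [-a, a] (unnormalized)
    return math.sin(n * math.pi * (x + a) / (2.0 * a))

def energy(n):
    # E_n = n^2 pi^2 / (8 a^2) in units hbar = m = 1
    return n * n * math.pi ** 2 / (8.0 * a * a)

h = 1e-4
residuals = []
for n in (1, 2, 3):
    x = 0.3   # an interior sample point
    d2 = (psi(n, x + h) - 2.0 * psi(n, x) + psi(n, x - h)) / (h * h)
    residuals.append(abs(-0.5 * d2 - energy(n) * psi(n, x)))
print(residuals)   # all ~ 0
```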

On the derivative of at the boundaries

Integrating

over we have

which gives us

or

We can infer how the derivative behaves over the potential discontinuity, so in the limit where we must have wave function continuity at despite the potential discontinuity.

From this sort of analysis, which is potential dependent, we see that for this infinite well potential our derivative must be continuous at the boundary.

Problem:

non-infinite step well potential.

Given a zero potential in the well

and outside of the well

Inside of the well, we have the solution worked previously, with

Then we have outside of the well the same form

With , this is

If , we have , and the states are “bound” or “localized” in the well.

Our solutions for this case are then

for , and respectively.

Question: Why can we not have

for ?

Answer: As we would then have

This is a non-physical solution, and we discard it based on our normalization requirement.

Our total solution, in regions respectively

To find the coefficients, set , , , and NORMALIZE .

Now, how about in region 2 (), implies that our equation is

We no longer have quantized energy for such a solution. These correspond to the “unbound” or “continuum” states. Even though we do not have quantized energy we still have quantum effects. Our solution becomes

Question. Why no , in the term?

Answer. We can, but this is not physically relevant. This is because we associate with an incoming wave, with reflection in the interval, and both in the |x| > a region.

FIXME: scan picture: 9.1 in my notebook.

Observe that this is not normalizable as is. We require “delta-function” normalization. What we can do is ask about current densities. How much passes through the barrier, and so forth.

Note to self. We probably want to consider a wave packet of states, something like:

Then we’d have something that we can normalize. Play with this later.

Setup for next week’s hydrogen atom lecture.

We’ll want to solve this using the formalism we’ve discussed. The general problem is a proton, positively charged, with a nearby negative charge (the electron).

Our equation to solve is

Here is the total kinetic energy term. For hydrogen we can consider the potential to be the Coulomb potential energy function that depends only on . We can transform this using a center of mass transformation. Introduce the centre of mass coordinate and relative coordinate vectors

With this transformation we can reduce the problem to a single coordinate PDE.

Problem 1.

A particle of mass is free to move along the x-direction such that . Express the time evolution operator defined by Eq. (2.166) using the momentum eigenstates with delta-function normalization. Find , where and are position eigenstates. What is the physical meaning of this expression?

Momentum matrix element.

We can expand the time evolution operator in series

We can now evaluate the momentum matrix element , which will essentially require the value of . That is

The momentum matrix element is therefore reduced to

Position matrix element.

For the position matrix element we have a similar sum

and require to continue. That is

Our position matrix element is therefore the differential operator

Physical interpretation of the position matrix element operator.

Finally, we need to determine the physical meaning of such a matrix element operator.

With the delta function that this matrix element operator includes it really only takes on a meaning with a convolution integral. The simplest such integral would be

or

The LHS has a physical meaning, and in the absolute square

provides the probability that the particle will be found in the region .

If we ignore the absolute square requirement and think of the (presumed normalized) wave function more loosely as representing a probability directly, then we can in turn give a meaning to the matrix element for the time evolution operator. This provides an operator valued weighting function that provides us with the probability that a particle initially at position will be at position at time . This probability is indirect since we need to absolute square and sum over a finite interval to obtain the probability of finding the particle in that interval.

Observe that the integral on the RHS of 1.3 is a summation over all , so we can think of this as adding the probabilities that the particle was at each point to arrive at the total probability for finding it at the new location . The time evolution operator matrix element provides the weighting in this conditional probability.

In 1.2 we found that the time evolution operator’s matrix element is a differential operator in the position representation. In the general case this means that this probability weighting is not just numeric, since the operation of the matrix element on the initial-time wave function can produce wave functions for additional states. In some special cases, we may find that this weighting is strictly numeric, and one such example would be the Gaussian wave packet . Application of the differential operations would then produce polynomial-weighted multiples of the original Gaussian. In this special case we would be able to write

Where is a polynomial valued function (and is in fact another exponential), and now just provides a numerical weighting for the conditional probability for the particle to move from to in time . In [1], this is called the Propagator function. It is perhaps justifiable to also call our similar operator valued matrix element a Propagator.

My grade.

I got full marks on this assignment. There’s apparently another way to do part of the first question on the position representation, and I was instructed by the TA to see the posted solution, which is not yet available.