Tag Archives: Differential Equations

A Narrow, Technical Problem in Partial Differential Equations

While I was in school, one of my professors set this problem to my classmates and me and challenged us to solve it over the next few days. I found the challenge intriguing, so I thought it was worth sharing. The problem was this:


Show that

\displaystyle v(x,t) = \int_{-\infty}^{\infty} f(x-y,t)g(y)dy,    (1.1)

where \displaystyle g(y) has finite support and \displaystyle f(x,t) satisfies the PDE

\displaystyle \frac{\partial f}{\partial t} = \kappa \frac{\partial^{2}f}{\partial x^{2}},   (1.2)

also satisfies that same PDE.



First off, what does finite support mean? The support of a function is the subset of its domain on which the function is nonzero; a function has finite (compact) support if that subset is bounded. For instance, a function that is nonzero only on the interval [-1,1] and zero everywhere else has finite support. (Just as a quick note: the proper definitions require some background in mathematical analysis and measure theory, which I have not studied in detail, so take this explanation with a grain of salt.)

As for the solution, we can rewrite the given PDE as

\displaystyle \frac{\partial v}{\partial t} - \kappa \frac{\partial^{2}v}{\partial x^{2}} = 0.    (2)

The PDE involves a first-order time derivative and a second-order spatial derivative of v, which we compute in turn.

\displaystyle \therefore \frac{\partial v}{\partial t} = \frac{\partial}{\partial t}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy,   (3.1)

and

\displaystyle \frac{\partial^{2} v}{\partial x^{2}} = \frac{\partial^{2}}{\partial x^{2}}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy.    (3.2)

Next, we substitute Eqs. (3.1) and (3.2) into Eq.(2), yielding

\displaystyle \frac{\partial}{\partial t}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy -\kappa \frac{\partial^{2}}{\partial x^{2}}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy = 0.    (4)

Note that, because g has finite support and f is assumed suitably smooth, we may differentiate under the integral sign (that is, interchange differentiation and integration), and the difference of two integrals over the same domain is the integral of the difference of their integrands (proofs of these facts are typically covered in a course in real analysis). Taking advantage of these facts gives

\displaystyle \int_{-\infty}^{\infty} \bigg\{\frac{\partial}{\partial t}f(x-y,t)-\kappa\frac{\partial^{2}}{\partial x^{2}}f(x-y,t)\bigg\}g(y)dy = 0.   (5)

Since f satisfies the PDE by assumption, the expression contained in the brackets vanishes identically. This means that

\displaystyle \int_{-\infty}^{\infty} 0 \cdot g(y)dy = 0.   (6)

This implies that the function \displaystyle v(x,t) does satisfy the given PDE (Eq.(2)).
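Although it was not part of the original exercise, the result can also be checked numerically. The sketch below is my own addition: it assumes f is the one-dimensional heat kernel (a known solution of Eq.(2)) and g is a smooth bump supported on [-1, 1], builds v by numerical convolution, and confirms that the residual of Eq.(2) is small.

```python
# Numerical sanity check of Eq.(2) for v(x,t) = \int f(x-y,t) g(y) dy.
# Assumptions (not from the original post): f is the 1-D heat kernel and
# g is a smooth bump supported on [-1, 1]; kappa = 0.5 is arbitrary.
import numpy as np

kappa = 0.5

def f(x, t):
    """Heat kernel: a known solution of f_t = kappa * f_xx."""
    return np.exp(-x**2 / (4.0 * kappa * t)) / np.sqrt(4.0 * np.pi * kappa * t)

def g(y):
    """Smooth bump with finite support [-1, 1]."""
    out = np.zeros_like(y)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

y = np.linspace(-1.5, 1.5, 2001)          # integration grid (covers supp g)

def v(x, t):
    """v(x,t) = integral of f(x-y,t) g(y) dy via the trapezoidal rule."""
    return np.trapz(f(x - y, t) * g(y), y)

# Finite-difference approximations of v_t and v_xx at a test point.
x0, t0, h = 0.3, 1.0, 1e-3
v_t  = (v(x0, t0 + h) - v(x0, t0 - h)) / (2.0 * h)
v_xx = (v(x0 + h, t0) - 2.0 * v(x0, t0) + v(x0 - h, t0)) / h**2

print("residual v_t - kappa*v_xx =", v_t - kappa * v_xx)   # should be close to zero
```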




References:

Definition of Support in Mathematics: https://en.wikipedia.org/wiki/Support_(mathematics)

Derivation of the Finite-Difference Equations

In my final semester, my course load included a graduate course that had two modules: astronomical instrumentation and numerical modeling. The latter focused on developing the equations of motion of geophysical fluid dynamics (see Research in Magnetohydrodynamics). Such equations are then converted into algorithms using a numerical method chosen to approximate the exact differential equations.

The purpose of this post is to derive the finite-difference equations. Specifically, I will be deriving the forward, backward, and centered first-derivative formulas. We start with the Taylor expansions of f about x, evaluated at x \pm h:

\displaystyle f(x+h)=\sum_{n=0}^{\infty}\frac{h^{n}}{n!}\frac{d^{n}f}{dx^{n}}, (1)

and

\displaystyle f(x-h)=\sum_{n=0}^{\infty}(-1)^{n}\frac{h^{n}}{n!}\frac{d^{n}f}{dx^{n}}. (2)

Let f(x_{j})=f_{j}, f(x_{j}+h)=f_{j+1}, f(x_{j}-h)=f_{j-1}. Therefore, if we consider the following differences…

\displaystyle f_{j+1}-f_{j}=hf^{\prime}_{j}+\frac{h^{2}}{2!}f^{\prime \prime}_{j}+\frac{h^{3}}{3!}f^{\prime\prime\prime}_{j}+..., (3)

and

\displaystyle f_{j}-f_{j-1}=hf^{\prime}_{j}-\frac{h^{2}}{2!}f^{\prime \prime}_{j}+\frac{h^{3}}{3!}f^{\prime\prime\prime}_{j}-..., (4)

and

\displaystyle f_{j+1}-f_{j-1}=2hf^{\prime}_{j}+\frac{2h^{3}}{3!}f^{\prime\prime\prime}_{j}+\frac{2h^{5}}{5!}f^{(5)}_{j}+..., (5)

and if we solve each expression for f^{\prime}_{j} and truncate the higher-order terms, we get

\displaystyle f^{\prime}_{j}=\frac{f_{j+1}-f_{j}}{h}+\mathcal{O}(h), (6)

\displaystyle f^{\prime}_{j}=\frac{f_{j}-f_{j-1}}{h}+\mathcal{O}(h), (7)

and

\displaystyle f^{\prime}_{j}=\frac{f_{j+1}-f_{j-1}}{2h}+\mathcal{O}(h^{2}), (8)

where the first is the forward difference, the second is the backward difference, and the last is the centered difference; the \mathcal{O}(\cdot) symbol denotes the truncation error coming from the neglected higher-order terms. The one-sided differences are first-order accurate, while the centered difference is second-order accurate because the even-order terms cancel. One can use similar logic to derive finite-difference formulas for the second derivative.
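To illustrate these error estimates (my own addition, using f(x) = sin x as an arbitrary test function), the sketch below evaluates the three formulas and shows the first-order error of the one-sided differences versus the second-order error of the centered difference:

```python
# Forward, backward, and centered differences of f(x) = sin(x) at x = 1,
# compared with the exact derivative cos(1). The test function and step
# sizes are arbitrary choices for illustration.
import numpy as np

f, dfdx_exact = np.sin, np.cos(1.0)
x = 1.0

for h in (0.1, 0.05, 0.025):
    forward  = (f(x + h) - f(x)) / h            # Eq.(6), error O(h)
    backward = (f(x) - f(x - h)) / h            # Eq.(7), error O(h)
    centered = (f(x + h) - f(x - h)) / (2 * h)  # Eq.(8), error O(h^2)
    print(f"h={h:6.3f}  fwd err={abs(forward - dfdx_exact):.2e}  "
          f"bwd err={abs(backward - dfdx_exact):.2e}  "
          f"ctr err={abs(centered - dfdx_exact):.2e}")
```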



 

Deriving the Bessel Function of the First Kind for Zeroth Order

NOTE: I verified the solution using the following text: Boyce, W. and DiPrima, R. Elementary Differential Equations. 

In this post, I shall be deriving the Bessel function of the first kind for the zeroth order Bessel differential equation. Bessel’s equation is encountered when solving differential equations in cylindrical coordinates and is of the form

\displaystyle x^{2}\frac{d^{2}y}{dx^{2}}+x\frac{dy}{dx}+(x^{2}-\nu^{2})y(x)=0, (1)

where \nu is the order of the equation; setting \nu = 0 gives Bessel's equation of order zero. I shall be making use of the Frobenius series assumption

\displaystyle y(x)=\sum_{j=0}^{\infty}a_{j}x^{j+r}, (2)

where upon taking the first and second order derivatives gives us

\displaystyle \frac{dy}{dx}=\sum_{j=0}^{\infty}(j+r)a_{j}x^{j+r-1}, (3)

and

\displaystyle \frac{d^{2}y}{dx^{2}}=\sum_{j=0}^{\infty}(j+r)(j+r-1)a_{j}x^{j+r-2}. (4)

Substituting into Eq.(1) with \nu=0, we arrive at

\displaystyle x^{2}\sum_{j=0}^{\infty}(j+r)(j+r-1)a_{j}x^{j+r-2}+x\sum_{j=0}^{\infty}(j+r)a_{j}x^{j+r-1}+x^{2}\sum_{j=0}^{\infty}a_{j}x^{j+r}=0. (5)

Distribution and simplification of Eq.(5) yields

\displaystyle \sum_{j=0}^{\infty}\bigg\{(j+r)(j+r-1)+(j+r)\bigg\}a_{j}x^{j+r}+\sum_{j=0}^{\infty}a_{j}x^{j+r+2}=0. (6)

If we evaluate the terms in which j=0 and j=1, we get the following

\displaystyle a_{0}\bigg\{r(r-1)+r\bigg\}x^{r}+a_{1}\bigg\{(1+r)r+(1+r)\bigg\}x^{r+1}+\sum_{j=2}^{\infty}\bigg\{[(j+r)(j+r-1)+(j+r)]a_{j}+a_{j-2}\bigg\}x^{j+r}=0, (7)

where, in the last sum, I have shifted the summation index by introducing the dummy variable m=j+2 and then relabeling, so that every term carries the factor x^{j+r}. Consider now the indicial equation (the coefficient of x^{r}),

\displaystyle r(r-1)+r=0, (8)

which upon solving gives the repeated root r=r_{1}=r_{2}=0. Setting the coefficient of x^{j+r} in the remaining sum to zero, we obtain the recurrence relation

\displaystyle a_{j}(r)=\frac{-a_{j-2}(r)}{[(j+r)(j+r-1)+(j+r)]}=\frac{-a_{j-2}(r)}{(j+r)^{2}}. (9)

To determine J_{0}(x) we let r=0 in which case the recurrence relation becomes

\displaystyle a_{j}=\frac{-a_{j-2}}{j^{2}}, (10)

where j=2,3,4,\ldots. Writing out the series,

\displaystyle J_{0}(x)=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+...,  (11)

we see from the coefficient of x^{r+1} in Eq.(7), namely a_{1}(1+r)^{2}=0, that a_{1}=0. The recurrence relation then forces all of the remaining odd coefficients a_{3},a_{5},a_{7},... to vanish as well. Only the even-indexed coefficients survive, so let j=2k, where k\in \mathbb{Z}^{+}; the recurrence relation is then modified to

\displaystyle a_{2k}=\frac{-a_{2k-2}}{(2k)^{2}}. (12)

Iterating this recurrence, one finds that the general term of the series is

\displaystyle a_{2k}x^{2k}=\frac{(-1)^{k}a_{0}x^{2k}}{2^{2k}(k!)^{2}}. (13)

Thus our solution for the Bessel function of the first kind is

\displaystyle J_{0}(x)=a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-1)^{k}x^{2k}}{2^{2k}(k!)^{2}}\bigg\}. (14)
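As a quick check (my own addition), the partial sums of Eq.(14), taking a_{0}=1 (the conventional normalization), can be compared against the built-in Bessel function in scipy:

```python
# Compare the truncated series of Eq.(14), with a_0 = 1, against scipy's built-in J0.
# The truncation order (20 terms) and the sample points are arbitrary choices.
from math import factorial
from scipy.special import j0

def J0_series(x, kmax=20):
    """Partial sum of Eq.(14): 1 + sum_k (-1)^k x^(2k) / (2^(2k) (k!)^2)."""
    return 1.0 + sum((-1) ** k * x ** (2 * k) / (2 ** (2 * k) * factorial(k) ** 2)
                     for k in range(1, kmax + 1))

for x in (0.5, 2.0, 5.0):
    print(x, J0_series(x), j0(x))   # the two columns should agree closely
```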

Consequences and some Elementary Theorems of the Ideal One-Fluid Magnetohydrodynamic Equations

SOURCE FOR CONTENT:

Priest, E. Magnetohydrodynamics of the Sun, 2014. Cambridge University Press. Ch.2.;

Davidson, P.A., 2001. An Introduction to Magnetohydrodynamics. Ch.4. 

We have seen how to derive the induction equation from Maxwell's equations, assuming charge neutrality and a non-relativistic plasma velocity. Thus, we have the induction equation

\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda \nabla^{2}\textbf{B}. (1)

Many texts in MHD make the comparison of the induction equation to the vorticity equation

\displaystyle \frac{\partial \Omega}{\partial t}= \nabla \times (\textbf{v} \times \Omega)+\nu \nabla^{2}\Omega, (2)

where \Omega = \nabla \times \textbf{v} is the vorticity. Throughout this post I will also make use of the vector identity

\nabla \times (\textbf{X}\times \textbf{Y})=\textbf{X}(\nabla \cdot \textbf{Y})-\textbf{Y}(\nabla \cdot \textbf{X})+(\textbf{Y}\cdot \nabla)\textbf{X}-(\textbf{X}\cdot \nabla)\textbf{Y}.
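As an aside, this identity can be checked symbolically. Below is a minimal sympy sketch of such a check (my own addition; the component functions f1, f2, f3 and g1, g2, g3 are arbitrary smooth placeholders):

```python
# Symbolic check of curl(X x Y) = X(div Y) - Y(div X) + (Y.grad)X - (X.grad)Y.
# The component functions f1..f3 and g1..g3 are arbitrary smooth placeholders.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f1, f2, f3 = [sp.Function(f'f{i}')(x, y, z) for i in (1, 2, 3)]
g1, g2, g3 = [sp.Function(f'g{i}')(x, y, z) for i in (1, 2, 3)]

X = f1 * N.i + f2 * N.j + f3 * N.k
Y = g1 * N.i + g2 * N.j + g3 * N.k

def adv(A, B):
    """Directional derivative (A . nabla)B, computed component by component."""
    Ax, Ay, Az = A.dot(N.i), A.dot(N.j), A.dot(N.k)
    comps = [Ax * sp.diff(B.dot(e), x)
             + Ay * sp.diff(B.dot(e), y)
             + Az * sp.diff(B.dot(e), z) for e in (N.i, N.j, N.k)]
    return comps[0] * N.i + comps[1] * N.j + comps[2] * N.k

lhs = curl(X.cross(Y))
rhs = X * divergence(Y) - Y * divergence(X) + adv(Y, X) - adv(X, Y)
difference = lhs - rhs
print([sp.simplify(difference.dot(e)) for e in (N.i, N.j, N.k)])   # expect [0, 0, 0]
```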

Indeed, comparing the induction equation (Eq.(1)) with the vorticity equation (Eq.(2)), the resemblance is clear: in each case the first term on the right-hand side governs the advection of the field lines (magnetic or vortex), while the second term governs their diffusion.

From this, we can impose restrictions and thus look at the consequences of the induction equation (since it governs the evolution of the magnetic field). Furthermore, we see that we can modify the kinematic theorems of classical vortex dynamics to describe the properties of magnetic field lines. After discussing the direct consequences of the induction equation, I will discuss a few theorems of vortex dynamics and then introduce their MHD analogue.

Central to this discussion is the magnetic Reynolds number. In geophysical fluid dynamics, the (ordinary) Reynolds number is the ratio of the inertial forces per volume to the viscous forces per volume, given by

\displaystyle Re=\frac{ul}{\nu}, (3)

where u, l, \nu represent the typical fluid velocity, the typical length scale, and the kinematic viscosity, respectively. The magnetic Reynolds number is the analogous ratio between the advective and diffusive terms of the induction equation, Re_{m}=ul/\lambda. There are two canonical regimes: (1) Re_{m}<<1, and (2) Re_{m}>>1. The former is sometimes called the diffusive limit, and the latter is called either the ideal limit or the infinite-conductivity limit (I prefer ideal limit, since the term infinite-conductivity limit is not quite accurate).
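To get a feel for these regimes, one can estimate Re_{m}=ul/\lambda directly. The numbers below are rough placeholder values I have chosen purely for illustration (they are not taken from the source):

```python
# Order-of-magnitude estimate of the magnetic Reynolds number Re_m = u*l/lambda.
# The velocity, length scale, and magnetic diffusivity are placeholder values
# chosen only for illustration.
u   = 1.0e2    # typical velocity [m/s]       (hypothetical)
l   = 1.0e8    # typical length scale [m]     (hypothetical)
lam = 1.0      # magnetic diffusivity [m^2/s] (hypothetical)

Re_m = u * l / lam
print(f"Re_m ~ {Re_m:.1e}")   # >> 1 here, i.e. the ideal (advection-dominated) regime
```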

 

Case I:  Re_{m}<<1

Consider again the induction equation

\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda\nabla^{2}\textbf{B}.

If we then assume that we are dealing with incompressible flows (i.e. \nabla \cdot \textbf{v}=0), we can use the aforementioned vector identity (together with \nabla \cdot \textbf{B}=0) to write the induction equation as

\displaystyle \frac{D\textbf{B}}{Dt}=(\textbf{B}\cdot \nabla)\textbf{v}+\lambda\nabla^{2}\textbf{B}. (4)

In the regime for which Re_{m}<<1, the induction equation for incompressible flows (Eq.(4)) assumes the form

\displaystyle \frac{\partial \textbf{B}}{\partial t}=\lambda \nabla^{2}\textbf{B}. (5)

Compare this now to the diffusion (heat) equation for the temperature,

\displaystyle \frac{\partial T}{\partial t}=\alpha \nabla^{2}T. (6)

The two equations have exactly the same form: in this limit the magnetic field lines simply diffuse through the plasma (and decay ohmically), with \lambda playing the role of a diffusivity.
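To see this diffusive behaviour concretely, here is a minimal sketch (my own addition) that evolves a single Fourier mode of B under Eq.(5) with an explicit finite-difference scheme; the grid, diffusivity, and initial condition are arbitrary choices:

```python
# Minimal 1-D demo of the diffusive limit, dB/dt = lambda * d^2B/dx^2 (Eq.(5)).
# Grid size, diffusivity, and initial field are arbitrary illustrative choices.
import numpy as np

lam, L, nx = 1.0, 2.0 * np.pi, 200
x = np.linspace(0.0, L, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / lam                      # within the explicit-scheme stability limit
B = np.sin(x)                               # single Fourier mode, wavenumber k = 1

for _ in range(2000):
    B += lam * dt * (np.roll(B, -1) - 2.0 * B + np.roll(B, 1)) / dx**2  # periodic Laplacian

t = 2000 * dt
print("max |B| after t = %.3f :" % t, np.abs(B).max())
print("expected exp(-lam*t)    :", np.exp(-lam * t))   # mode decays as exp(-lam k^2 t), k = 1
```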

 

Case II: Re_{m}>>1

If we now consider the case for which the advective term dominates, we see that the induction equation takes the form

\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B}). (7)

What this implies is that the magnetic field lines become “frozen into” the plasma, giving rise to Alfvén’s theorem of flux freezing.

Many astrophysical systems are characterized by a high magnetic Reynolds number. Such systems include the solar magnetic field (e.g. the heliospheric current sheet), planetary dynamos (Earth, Jupiter, and Saturn), and galactic magnetic fields.

Kelvin’s Theorem & Helmholtz’s Theorem:

Kelvin’s Theorem: Since the vorticity satisfies (\nabla \cdot \Omega)=0, its flux through any closed surface vanishes,

\displaystyle \oint \Omega \cdot d\textbf{S}=0. (8)

Consider now a closed curve that moves with the fluid (we call this a material curve C_{m}(t)); we may define the circulation around it as

\displaystyle \Gamma = \oint_{C_{m}(t)}\textbf{v}\cdot d\textbf{l}. (9)

Kelvin’s theorem states that if the material curve always consists of the same fluid particles, then the circulation, given by Eq.(9), is invariant in time (for an inviscid, barotropic fluid subject to conservative body forces).

Helmholtz’s Theorem:

Part I: Suppose a fluid element lies on a vortex line at some initial time t=t_{0}. The theorem states that this fluid element will continue to lie on that vortex line at all later times.

Part II: This part says that the flux of vorticity

\displaystyle \Phi = \int \Omega \cdot d\textbf{S}, (10)

is the same through every cross-section of a vortex tube and is also invariant in time as the tube moves with the fluid.

 

Now, the magnetic analogues of Helmholtz’s theorems are Alfvén’s theorem of flux freezing and the conservation of magnetic flux, magnetic field lines, and magnetic topology.

The first says that fluid elements which lie along magnetic field lines will continue to do so indefinitely; this is the direct counterpart of Helmholtz’s first theorem.

The second requires a more detailed argument, but it says that the magnetic flux through a material surface moving with the plasma remains constant. The third says that the magnetic field lines, and hence the magnetic structure, may be stretched and deformed in many ways, but the overall magnetic topology (the way the field lines are connected) remains the same.

The justification of these last two requires some proof-like arguments, which I will leave to another post.

In my project, I considered the case of high magnetic Reynolds number in order to examine the MHD processes present in the metallic-hydrogen region of Jupiter's interior.

In the next post, I will “prove” the theorems I mention and discuss the project.

Basic Equations of Ideal One-Fluid Magnetohydrodynamics: (Part V) The Energy Equations and Summary

SOURCE FOR CONTENT: Priest E., Magnetohydrodynamics of the Sun, 2014. Ch. 2. Cambridge University Press.

 

The final subset of equations deals with energy. My undergraduate research did not take the thermodynamics of the conducting fluid into account, in order to keep the math relatively simple; however, to understand MHD one must include these considerations. There are three essential ingredients of the energy equations:

I. Heat Equation:

We may write this equation in terms of the entropy S as

\displaystyle \rho T \bigg\{\frac{\partial S}{\partial t}+(\textbf{v}\cdot \nabla)S\bigg\}=-\mathcal{L}, (1)

where \mathcal{L} represents the net effect of energy sinks and sources and is called the energy loss function. For simplicity, one typically writes the form of the heat equation to be

\displaystyle \frac{\rho^{\gamma}}{\gamma -1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\mathcal{L}. (2)
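For example, in the absence of energy sources and sinks (\mathcal{L}=0), Eq.(2) reduces to \frac{d}{dt}\big\{P/\rho^{\gamma}\big\}=0, i.e. P\propto\rho^{\gamma} following the motion, which is the familiar adiabatic energy equation.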

II. Conduction

For this equation one considers the explicit form of the energy loss function as being

\displaystyle \mathcal{L}=\nabla \cdot \textbf{q}+L_{r}-\frac{J^{2}}{\sigma}-F_{H}, (3)

where \textbf{q} represents heat flux by particle conduction, L_{r} is the net radiation, J^{2}/\sigma is the Ohmic dissipation, and F_{H} represents external heating sources, if any exist.  The term \textbf{q} is given by

\textbf{q}=-\kappa \nabla T, (4)

where \kappa is the thermal conduction tensor.

III. Radiation

The equation for radiation can be written as a variation of the diffusion equation for temperature

\displaystyle \frac{DT}{Dt}=\kappa \nabla^{2}T (5)

where \kappa here denotes the thermal diffusivity given by

\displaystyle \kappa = \frac{\kappa_{r}}{\rho c_{P}}. (6)

We may write the final form of the energy equation as

\displaystyle \frac{\rho^{\gamma}}{\gamma-1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\nabla \cdot \textbf{q}-L_{r}+J^{2}/\sigma+F_{H}, (7)

where \textbf{q} is given by Eq.(4).

As far as my undergraduate research is concerned, I include these equations for completeness.

 

So, to summarize the series so far, I have derived most of the basic equations of the ideal one-fluid model of magnetohydrodynamics. The equations are

\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda \nabla^{2}\textbf{B}, (A)

\displaystyle \frac{\partial \textbf{v}}{\partial t}+(\textbf{v}\cdot \nabla)\textbf{v}=-\frac{1}{\rho}\nabla\bigg\{P+\frac{B^{2}}{2\mu_{0}}\bigg\}+\frac{(\textbf{B}\cdot \nabla)\textbf{B}}{\mu_{0}\rho}, (B)

\displaystyle \frac{\partial \rho}{\partial t}+\nabla \cdot (\rho\textbf{v})=0, (C)

\displaystyle \frac{\partial \Omega}{\partial t}+(\textbf{v}\cdot \nabla)\Omega = (\Omega \cdot \nabla)\textbf{v}+\nu \nabla^{2}\Omega, (D)

\displaystyle P = \frac{k_{B}}{m}\rho T = \frac{\tilde{R}}{\tilde{\mu}}\rho T, (E) (Ideal Gas Law)

and

\displaystyle \frac{\rho^{\gamma}}{\gamma-1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\nabla \cdot \textbf{q}-L_{r}+J^{2}/\sigma +F_{H}.  (F)

We also have the following ancillary equations

\displaystyle (\nabla \cdot \textbf{B})=0, (G.1)

since we haven’t found evidence of the existence of magnetic monopoles. We also have that

\displaystyle \nabla \times \textbf{B}=\mu_{0}\textbf{J}, (G.2)

where we are assuming that the plasma velocity satisfies v << c (i.e. it is non-relativistic), so that the displacement current may be neglected. Finally, for incompressible flows we have (\nabla \cdot \textbf{v})=0, corresponding to isopycnal (constant-density) flows.

 

In the next post, I will discuss some of the consequences of these equations and some elementary theorems involving conservation of magnetic flux and magnetic field line topology.

Solution to the Hermite Differential Equation

One typically encounters the Hermite differential equation in the context of the quantum harmonic oscillator and the corresponding solution of the Schrödinger equation. However, I will consider this equation in its “raw” mathematical form, viz.

\displaystyle \frac{d^{2}y}{dx^{2}}-2x\frac{dy}{dx}+\lambda y(x) =0. (1)

First, we will consider the more general case, leaving \lambda unspecified. A future post will consider the second case, \lambda = 2n, n\in \mathbb{Z}^{+}, where \mathbb{Z}^{+}=\bigg\{x\in\mathbb{Z}|x > 0\bigg\}.

PART I: 

Let us assume the solution has the form

\displaystyle y(x)=\sum_{j=0}^{\infty}a_{j}x^{j}. (2)

Now we take the necessary derivatives

\displaystyle y^{\prime}(x)=\sum_{j=1}^{\infty}ja_{j}x^{j-1}, (3)

\displaystyle y^{\prime \prime}(x)=\sum_{j=2}^{\infty} j(j-1)a_{j}x^{j-2}, (4)

which upon substitution into Eq.(1) yields the following:

\displaystyle \sum_{j=2}^{\infty}j(j-1)a_{j}x^{j-2}-\sum_{j=1}^{\infty}2ja_{j}x^{j}+\sum_{j=0}^{\infty}\lambda a_{j}x^{j}=0. (5)

Introducing the dummy variable m=j-2 in the first sum (and noting that the j=0 term of the second sum vanishes, so its lower limit may be extended to 0), we arrive at

\displaystyle \sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}x^{j}-\sum_{j=0}^{\infty}2ja_{j}x^{j}+\sum_{j=0}^{\infty}\lambda a_{j}x^{j}=0. (6)

Bringing this under one summation sign…

\displaystyle \sum_{j=0}^{\infty}[(j+2)(j+1)a_{j+2}-2ja_{j}+\lambda a_{j}]x^{j}=0. (7)

Since this power series must vanish identically for all x, each coefficient of x^{j} must be zero; we therefore require that

\displaystyle (j+2)(j+1)a_{j+2}=(2j - \lambda)a_{j}, (8)

or

\displaystyle a_{j+2}=\frac{(2j-\lambda)a_{j}}{(j+2)(j+1)}. (9)

This is our recurrence relation. If we let j=0,1,2,3,... we arrive at two linearly independent solutions (one even and one odd) in terms of the fundamental coefficients a_{0} and a_{1} which may be written as

\displaystyle y_{even}(x)= a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-\lambda)(4-\lambda)\cdots(4k-4-\lambda)}{(2k)!}x^{2k}\bigg\}, (10)

and

\displaystyle y_{odd}(x)=a_{1}\bigg\{x+\sum_{k=1}^{\infty}\frac{(2-\lambda)(6-\lambda)\cdots(4k-2-\lambda)}{(2k+1)!}x^{2k+1}\bigg\}. (11)

Thus, our final solution is the following

\displaystyle y(x)=y_{even}(x)+y_{odd}(x), (12.1)

 

\displaystyle y(x)=a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-\lambda)(4-\lambda)\cdots(4k-4-\lambda)}{(2k)!}x^{2k}\bigg\}+a_{1}\bigg\{x+\sum_{k=1}^{\infty}\frac{(2-\lambda)(6-\lambda)\cdots(4k-2-\lambda)}{(2k+1)!}x^{2k+1}\bigg\}. (12.2)
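As a sanity check on the recurrence relation Eq.(9) (my own addition; the value \lambda = 3.7, the truncation order, and the test point are arbitrary choices), the sketch below builds a truncated series from the recurrence and evaluates the residual of Eq.(1):

```python
# Sanity check of the recurrence Eq.(9): build a truncated series solution of
# y'' - 2x y' + lambda*y = 0 and evaluate the residual of Eq.(1) at a test point.
# lambda = 3.7, the truncation order, and x = 0.4 are arbitrary choices.
from math import perm

lam, nterms = 3.7, 40
a = [0.0] * nterms
a[0], a[1] = 1.0, 1.0                                        # fundamental coefficients a_0, a_1
for j in range(nterms - 2):
    a[j + 2] = (2 * j - lam) * a[j] / ((j + 2) * (j + 1))    # recurrence relation, Eq.(9)

def y(x, d=0):
    """d-th derivative of the truncated series sum_j a_j x^j."""
    return sum(a[j] * perm(j, d) * x ** (j - d) for j in range(d, nterms))

x0 = 0.4
print("residual of Eq.(1):", y(x0, 2) - 2 * x0 * y(x0, 1) + lam * y(x0))   # should be ~0
```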

 

 

 

Legendre Polynomials

Some time ago, I wrote a post discussing the solution to Legendre’s ODE. In that post, I gave an alternative definition of the Legendre polynomials by stating Rodrigues’ formula:

\displaystyle P_{p}(x)=\frac{1}{2^{p}p!}\frac{d^{p}}{dx^{p}}\bigg\{(x^{2}-1)^{p}\bigg\}, (0.1)

where

\displaystyle P_{p}(x)=\sum_{n=0}^{\alpha}\frac{(-1)^{n}(2p-2n)!}{2^{p}n!(p-n)!(p-2n)!}x^{p-2n}, (0.2)

and

\displaystyle P_{p}(x)=\sum_{n=0}^{\beta}\frac{(-1)^{n}(2p-2n)!}{2^{p}n!(p-n)!(p-2n)!}x^{p-2n}, (0.3)

in which I have let \displaystyle \alpha=p/2 and \displaystyle \beta=(p-1)/2, corresponding to the expressions for even and odd p, respectively.

However, in this post I shall be using the approach of the generating function. This will be from a purely mathematical perspective, so I am not applying this to any particular topic of physics.

Consider a triangle with sides \displaystyle X, Y, Z and angles \displaystyle \theta, \phi, \lambda, where \lambda is the angle opposite the side Z. The law of cosines therefore maintains that

\displaystyle Z^{2}=X^{2}+Y^{2}-2XY\cos{(\lambda)}. (1)

We can factor \displaystyle X^{2} out of the right-hand side of Eq.(1), take the square root, and invert, yielding

\displaystyle \frac{1}{Z}=\frac{1}{X}\bigg\{1+\bigg\{\frac{Y}{X}\bigg\}^{2}-2\bigg\{\frac{Y}{X}\bigg\}\cos{(\lambda)}\bigg\}^{-1/2}. (2)

Now, we can expand this by means of the binomial expansion. Let \displaystyle \kappa \equiv \bigg\{\frac{Y}{X}\bigg\}^{2}-2\bigg\{\frac{Y}{X}\bigg\}\cos{(\lambda)}, therefore the binomial expansion is

\displaystyle \frac{1}{(1+\kappa)^{1/2}}=1-\frac{1}{2}\kappa+\frac{3}{8}\kappa^{2}-\frac{5}{16}\kappa^{3}+... (3)

Hence, if we expand this in terms of the sides and angle of the triangle and group by powers of \displaystyle (Y/X), we get

\displaystyle \frac{1}{Z}=\frac{1}{X}\bigg\{1+\bigg\{\frac{Y}{X}\bigg\}\cos{(\lambda)}+\bigg\{\frac{Y}{X}\bigg\}^{2}\frac{1}{2}(3\cos^{2}{(\lambda)}-1)+\bigg\{\frac{Y}{X}\bigg\}^{3}\frac{1}{2}(5\cos^{3}{(\lambda)}-3\cos{(\lambda)})+...\bigg\}. (4)

Notice the coefficients: these are precisely the Legendre polynomials evaluated at \cos{(\lambda)}. Therefore, we see that

\displaystyle \frac{1}{Z}=\frac{1}{X}\bigg\{\sum_{l=0}^{\infty}\bigg\{\frac{Y}{X}\bigg\}^{l}P_{l}(\cos{(\lambda)})\bigg\}, (5)

or

\displaystyle \frac{1}{Z}=\frac{1}{\sqrt[]{X^{2}+Y^{2}-2XY\cos{(\lambda)}}}=\sum_{l=0}^{\infty}\frac{Y^{l}}{X^{l+1}}P_{l}(\cos{(\lambda)}). (6)

Thus we see that the generating function \displaystyle 1/Z generates the Legendre polynomials. Two prominent uses of these polynomials are in gravity, for the potential of spherical mass distributions, and in electrostatics. For example, suppose we have the potential of a charge distribution \rho,

\displaystyle V(\textbf{r})=\frac{1}{4\pi\epsilon_{0}}\int_{V}\frac{\rho(\textbf{r}^{\prime})}{\mathcal{R}}d\tau^{\prime}, (7.1)

where \mathcal{R} is the distance between the field point \textbf{r} and the source point \textbf{r}^{\prime}. We may use the result of the generating function to expand 1/\mathcal{R} and obtain the multipole expansion of the electric potential due to an arbitrary charge distribution,

\displaystyle V(\textbf{r})=\frac{1}{4\pi\epsilon_{0}}\sum_{l=0}^{\infty}\frac{1}{r^{l+1}}\int (r^{\prime})^{l}P_{l}(\cos{(\lambda)})\rho(\textbf{r}^{\prime})d\tau^{\prime}, (7.2)

where \lambda here is the angle between \textbf{r} and \textbf{r}^{\prime}.

(For more details, see Chapter 3 of Griffiths’s text Introduction to Electrodynamics.)
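As a numerical check of the generating-function relation (my own addition; the values chosen for t = Y/X and \cos{(\lambda)} are arbitrary), one can compare the truncated Legendre sum against the closed form:

```python
# Numerical check of the generating function: 1/sqrt(1 - 2*t*mu + t^2)
# should equal sum_l t^l P_l(mu) for |t| < 1. t and mu are arbitrary test values.
import numpy as np
from scipy.special import eval_legendre

t, mu = 0.3, 0.6          # t plays the role of Y/X, mu = cos(lambda)
closed_form = 1.0 / np.sqrt(1.0 - 2.0 * t * mu + t**2)
series = sum(t**l * eval_legendre(l, mu) for l in range(50))
print(closed_form, series)   # the two values should agree to machine precision
```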

 

Monte Carlo Simulations of Radiative Transfer: Basics of Radiative Transfer Theory (Part IIa)

SOURCES FOR CONTENT:

  1. Chandrasekhar, S., 1960. “Radiative Transfer”. Dover. Ch. 1.
  2. Choudhuri, A.R., 2010. “Astrophysics for Physicists”. Cambridge University Press. Ch. 2.
  3. Boyce, W.E., and DiPrima, R.C., 2005. “Elementary Differential Equations”. John Wiley & Sons. Sec. 2.1.

 

Recall from last time the radiative transfer equation,

\displaystyle \frac{dI_{\nu}}{ds}= M_{\nu}-N_{\nu}I_{\nu}, (1)

where M_{\nu} and N_{\nu} are the emission and absorption coefficients (per unit length), respectively. We define the optical depth \tau_{\nu} in terms of the absorption coefficient,

\displaystyle N_{\nu}=\frac{d\tau_{\nu}}{ds}, (2)

and the source function U_{\nu}=M_{\nu}/N_{\nu}; rearrangement and substitution into Eq.(1) then gives

\displaystyle \frac{dI_{\nu}(\tau_{\nu})}{d\tau_{\nu}}+I_{\nu}(\tau_{\nu})= U_{\nu}(\tau_{\nu}). (3)

We may solve this equation by using the method of integrating factors, by which we multiply Eq.(3) by some unknown function (the integrating factor) \mu(\tau_{\nu}) yielding

\displaystyle \mu(\tau_{\nu})\frac{dI_{\nu}(\tau_{\nu})}{d\tau_{\nu}}+\mu(\tau_{\nu})I_{\nu}(\tau_{\nu})=\mu(\tau_{\nu})U_{\nu}(\tau_{\nu}). (4)

Upon examining Eq.(4), we see that the left-hand side has the form of a product rule. It follows that

\displaystyle \frac{d}{d\tau_{\nu}}\bigg\{\mu(\tau_{\nu})I_{\nu}(\tau_{\nu})\bigg\}=\mu({\tau_{\nu}})U_{\nu}(\tau_{\nu}). (5)

This only works if d\mu(\tau_{\nu})/d\tau_{\nu}=\mu(\tau_{\nu}). To find such a function, consider the equation for \mu(\tau_{\nu}) alone:

\displaystyle \frac{d\mu(\tau_{\nu})}{d\tau_{\nu}}=\mu(\tau_{\nu}). (6.1)

This is a separable ordinary differential equation so we can rearrange and integrate to get

\displaystyle \int \frac{d\mu(\tau_{\nu})}{\mu(\tau_{\nu})}=\int d\tau_{\nu}\implies \ln(\mu(\tau_{\nu}))= \tau_{\nu}+C, (6.2)

where C is a constant of integration. Since any one integrating factor suffices, we may take the constant of integration to be 0; taking the exponential of (6.2) then gives us

\displaystyle \mu(\tau_{\nu})=\exp{(\tau_{\nu})}. (6.3)

This is our integrating factor. Just as a check, let us take the derivative of our integrating factor with respect to \tau_{\nu},

\displaystyle \frac{d}{d\tau_{\nu}}\exp{(\tau_{\nu})}=\exp{(\tau_{\nu})},

Thus this requirement is satisfied. If we now return to Eq.(4) and substitute in our integrating factor we get

\displaystyle \frac{d}{d\tau_{\nu}}\bigg\{\exp{(\tau_{\nu})}I_{\nu}(\tau_{\nu})\bigg\}=\exp{(\tau_{\nu})}U_{\nu}(\tau_{\nu}). (7)

Both sides can now be integrated directly. However, we are integrating from an optical depth of 0 up to some optical depth \tau_{\nu} (using \bar{\tau}_{\nu} as the integration variable); hence we have that

\displaystyle \int_{0}^{\tau_{\nu}}d\bigg\{\exp{(\tau_{\nu})}I_{\nu}(\tau_{\nu})\bigg\}=\int_{0}^{\tau_{\nu}}\bigg\{\exp{(\bar{\tau}_{\nu})}U_{\nu}(\bar{\tau}_{\nu})\bigg\}d\bar{\tau}_{\nu}, (8)

We find that

\displaystyle \exp{(\tau_{\nu})}I_{\nu}(\tau_{\nu})-I_{\nu}(0)=\int_{0}^{\tau_{\nu}}\bigg\{\exp{(\bar{\tau}_{\nu})}U_{\nu}(\bar{\tau}_{\nu})\bigg\}d\bar{\tau}_{\nu} (9),

where if we add I_{\nu}(0) and divide by \exp{(\tau_{\nu})} we arrive at the general solution of the radiative transfer equation

\displaystyle I_{\nu}(\tau_{\nu}) = I_{\nu}(0)\exp{(-\tau_{\nu})}+\int_{0}^{\tau_{\nu}}\exp{(\bar{\tau}_{\nu}-\tau_{\nu})}U_{\nu}(\bar{\tau}_{\nu})d\bar{\tau}_{\nu}. (10)

This is the mathematically formal solution to the radiative transfer equation. While mathematically sound, much of the more interesting physical phenomena require more complicated equations and therefore more sophisticated methods of solving them (an example would be the use of quadrature formulae or n-th approximation for isotropic scattering).
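A quick way to see Eq.(10) in action (my own addition) is the special case of a constant source function U_{\nu}=U_{0}, for which the integral can be done by hand, giving I_{\nu}(\tau_{\nu})=I_{\nu}(0)e^{-\tau_{\nu}}+U_{0}(1-e^{-\tau_{\nu}}). The sketch below checks this against a direct numerical integration of Eq.(3); the chosen values of I_{\nu}(0), U_{0}, and \tau_{\nu} are arbitrary:

```python
# Check the formal solution Eq.(10) for a constant source function U_0:
# I(tau) = I(0)*exp(-tau) + U0*(1 - exp(-tau)), against a direct numerical
# integration of Eq.(3). I0, U0, and tau_max are arbitrary test values.
import numpy as np
from scipy.integrate import solve_ivp

I0, U0, tau_max = 2.0, 0.5, 3.0

def rhs(tau, I):
    return U0 - I                      # Eq.(3): dI/dtau = U - I

numeric = solve_ivp(rhs, (0.0, tau_max), [I0], rtol=1e-10, atol=1e-12)
formal = I0 * np.exp(-tau_max) + U0 * (1.0 - np.exp(-tau_max))   # Eq.(10) with constant U
print(numeric.y[0, -1], formal)        # the two numbers should agree closely
```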

Recall also that in general we can write the phase function p(\theta,\phi; \theta^{\prime},\phi^{\prime}) via the following

\displaystyle p(\theta,\phi;\theta^{\prime},\phi^{\prime})=\sum_{l=0}^{\infty}\gamma_{l}P_{l}(\cos{\Theta}). (11)

Let us consider the case for which l=0 in the sum given by (11). This then would mean that the phase function is constant

p(\theta,\phi;\theta^{\prime},\phi^{\prime})=\gamma_{0}=const. (12)

Such a phase function corresponds to isotropic scattering. The term isotropic means, in this context, that radiation is scattered equally in all directions. Such a case yields a source function of the form

\displaystyle U_{\nu}(\tau_{\nu})=\frac{1}{4\pi}\int_{0}^{\pi}\int_{0}^{2\pi}\gamma_{0}I_{\nu}(\tau_{\nu})\sin{\theta^{\prime}}d\theta^{\prime}d\phi^{\prime}, (13)

where upon use in the radiative transfer equation we get the integro-differential equation

\displaystyle \frac{dI_{\nu}(\tau_{\nu})}{d\tau_{\nu}}+I_{\nu}(\tau_{\nu})= \frac{1}{4\pi}\int_{0}^{\pi}\int_{0}^{2\pi}\gamma_{0}I_{\nu}(\tau_{\nu})\sin{\theta^{\prime}}d\theta^{\prime}d\phi^{\prime}. (14)

Solution of this equation is beyond the scope of the project. In the next post I will discuss Rayleigh scattering and the corresponding phase function.

 

 

Monte Carlo Simulations of Radiative Transfer: Basics of Radiative Transfer Theory (Part I)

SOURCE FOR CONTENT: Chandrasekhar, S., 1960. Radiative Transfer. 1. 

 

In this post, I will be discussing the basics of radiative transfer theory necessary to understand the methods used in this project. I will start with some definitions, then I will look at the radiative transfer equation and consider two simple cases of scattering.

The first definition we require is the specific intensity I_{\nu}, defined such that the amount of energy dE_{\nu} in the frequency interval (\nu,\nu+d\nu) passing through a surface element d\Sigma, within a solid angle d\Omega about a direction making an angle \theta with the normal to d\Sigma, in a time dt is

dE_{\nu}=I_{\nu}\cos{\theta}d\nu d\Sigma d\Omega dt. (1)

We must also consider the net flux given by

\displaystyle d\nu d\Sigma dt \int I_{\nu}\cos{\theta}d\Omega, (2)

where if we integrate over all solid angles \Omega we get

\pi F_{\nu}=\displaystyle \int I_{\nu}\cos{\theta}d\Omega. (3)

Let d\Lambda be an element of a surface \Lambda in a volume V through which radiation passes. Further, let \Theta and \theta denote the angles between the line joining the two elements and the normals to d\Lambda and d\Sigma, respectively. The energy flowing across d\Sigma toward d\Lambda is then

I_{\nu}\cos{\theta}d\Sigma d\Omega^{\prime}d\nu = I_{\nu}d\nu \frac{\cos{\Theta}\cos{\theta}d\Sigma d\Lambda}{r^{2}}, (4)

where d\Omega^{\prime}=d\Lambda \cos{\Theta}/r^{2} is the solid angle subtended by the surface element d\Lambda at d\Sigma, and dV=l\,d\Sigma \cos{\theta} is the volume intercepted in V by the pencil of radiation, l being the length of its path through the volume. Since the radiation spends a time l/c inside this volume (c being the speed of light), the energy content of the pencil is the energy crossing d\Sigma per unit time multiplied by l/c. Summing over all pencils and all volume elements (and taking V small enough that I_{\nu} is effectively uniform across it), the total radiant energy in the volume is

\displaystyle \frac{d\nu}{c}\int dV \int I_{\nu} d\Omega=\frac{V}{c}d\nu \int I_{\nu}d\Omega. (5)

We now define the energy density (per unit frequency) as being

U_{\nu}=\displaystyle \frac{1}{c}\int I_{\nu}d\Omega, (6.1)

while the average intensity is

J_{\nu}=\displaystyle \frac{1}{4\pi}\int I_{\nu}d\Omega, (6.2)

and the relation between these two equations is

U_{\nu}=\frac{4\pi}{c}J_{\nu}. (6.3)

I will now introduce the radiative transfer equation. This equation expresses the balance, along a ray, between the radiation absorbed and the radiation emitted. The equation is

\frac{dI_{\nu}}{ds}=-\epsilon \rho I_{\nu}+h_{\nu}\rho, (7)

where if we divide by -\epsilon_{\nu} \rho we get

-\frac{1}{\epsilon_{\nu}\rho}\frac{dI_{\nu}}{ds}=I_{\nu}-U_{\nu}(\theta, \phi), (8)

where U_{\nu}(\theta,\phi) represents the source function, which for scattering is given by

U_{\nu}(\theta,\phi)=\displaystyle \frac{1}{4\pi}\int_{0}^{\pi}\int_{0}^{2\pi}p(\theta,\phi;\theta^{\prime},\phi^{\prime})I_{\nu}\sin{\theta^{\prime}}d\theta^{\prime}d\phi^{\prime}. (9)

The source function is, in general, the ratio of the emission coefficient to the absorption coefficient. One of the factors appearing in the scattering source function is the phase function, which varies according to the specific scattering geometry. In its most general form, we can represent the phase function as an expansion in Legendre polynomials:

p(\theta, \phi; \theta^{\prime},\phi^{\prime})=\displaystyle \sum_{j=0}^{\infty}\gamma_{j}P_{j}(\mu), (10)

where we have let \mu = \cos{\theta} (in keeping with our notation in previous posts).

In Part II, we will discuss a few simple cases of scattering and their corresponding phase functions, as well as obtaining the formal solution of the radiative transfer equation. (DISCLAIMER: While this solution will be consistent in a mathematical sense, it is not exactly an insightful solution since much of the more interesting and complex cases involve the solution of either integro-differential equations or pure integral equations (a possible new topic).)
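Since this series is building toward a Monte Carlo treatment, here is a minimal sketch (my own addition, not the project's actual code) of the two sampling rules on which such a simulation rests: free paths in optical depth drawn from \tau = -\ln{\xi}, and isotropic scattering directions drawn uniformly over the sphere:

```python
# Minimal Monte Carlo sampling rules for radiative transfer (illustrative only,
# not the project's code): optical-depth free paths tau = -ln(xi) and isotropic
# scattering directions sampled uniformly on the unit sphere.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Free path lengths in optical depth: P(tau) = exp(-tau)  =>  tau = -ln(xi).
tau = -np.log(1.0 - rng.random(n))
print("mean free path in tau:", tau.mean())          # should be close to 1

# Isotropic scattering: mu = cos(theta) uniform on [-1, 1], phi uniform on [0, 2*pi).
mu = 2.0 * rng.random(n) - 1.0
phi = 2.0 * np.pi * rng.random(n)
print("mean cos(theta):", mu.mean())                 # should be close to 0
```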

 

Simple Harmonic Oscillators (SHOs) (Part I)

We all see this happening in everyday life: objects moving back and forth. In physics, these objects are called simple harmonic oscillators. While I was taking my undergraduate physics courses, one of my favorite topics was SHOs because of the way the mathematics and physics work in tandem to explain something we see every day. The purpose of this post is to engage followers and get them to think about this phenomenon in a more critical manner.

Every object has an equilibrium position at which it tends to remain at rest; if it is perturbed, it will oscillate about this equilibrium point and, in the presence of resistance, eventually return to rest. When the object is displaced, the restoring force is given by Hooke’s law of elasticity, F_{A}=-k\textbf{r}. If we consider the other forces as well, the net force is the sum of a forcing function, the restoring force, and a resistance force:

F=F_{forcing}+F_{A}-F_{R}= F_{forcing}-k\textbf{r}-\beta \dot{\textbf{r}}; (1)

note that we are assuming that the resistance force is proportional to the velocity of the object, F_{R}=\beta \dot{\textbf{r}}. Suppose further that we induce these oscillations in a periodic manner, with the forcing given by

F_{forcing}=F_{0}\cos{\omega t}. (2)

Now, to be more precise, we really should define the position vector. So, \textbf{r}=x\hat{i}+y\hat{j}+z\hat{k}. Therefore, we actually have a system of three second order linear non-homogeneous ordinary differential equations in three variables:

m\ddot{ x}+\beta \dot{x}+kx=F_{0}\cos{\omega t}, (3.1)

m\ddot{y}+\beta \dot{y}+ky=F_{0}\cos{\omega t}, (3.2)

m\ddot{z}+\beta \dot{z}+kz=F_{0}\cos{\omega t}. (3.3)

(QUICK NOTE: In the above equations, I am using the Newtonian notation for derivatives, only for convenience.)  I will just make some simplifications. I will divide both sides by the mass, and I will define the following parameters: \gamma \equiv \beta/m, \omega_{0} \equiv k/m, and \alpha \equiv F_{0}/m. Furthermore, I am only going to consider the y component of this system. Thus, the equation that we seek to solve is

\ddot{y}+\gamma \dot{y}+\omega_{0}y=\alpha\cos{\omega t}. (4)

Now, in order to solve this non-homogeneous equation, we use the method of undetermined coefficients. By this we mean to say that the general solution to the non-homogeneous equation is of the form

y = Ay_{1}(t)+By_{2}(t)+Y(t), (5)

where Y(t) is a particular solution of the non-homogeneous equation (to be found by undetermined coefficients) and y_{1}, y_{2} are the fundamental solutions of the homogeneous equation:

\ddot{y}_{h}+\gamma \dot{y}_{h}+\omega_{0} y_{h} = 0. (6)

Let y_{h}(t)=D\exp{(\lambda t)}. Taking the first and second time derivatives, we get \dot{y}_{h}(t)=\lambda D\exp{(\lambda t)} and \ddot{y}_{h}(t)=\lambda^{2}D\exp{(\lambda t)}. Therefore, Eq. (6) becomes, after factoring out the exponential term,

D\exp{(\lambda t)}[\lambda^{2}+\gamma \lambda +\omega_{0}]=0.  (7)

Since D\exp{(\lambda t)}\neq 0, it follows that

\lambda^{2}+\gamma \lambda +\omega_{0}=0. (8)

This is a quadratic equation in \lambda, whose solution is obtained from the quadratic formula:

\lambda =\frac{-\gamma \pm \sqrt[]{\gamma^{2}-4\omega_{0}}}{2}. (9)

Part II of this post will discuss the three distinct cases in which the discriminant \gamma^{2}-4\omega_{0} is greater than, equal to, or less than 0, and the consequent solutions. I will also obtain the solution to the non-homogeneous equation in that post.
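As a small numerical companion to Eq.(4) and Eq.(8) (my own addition; the parameter values for \gamma, \omega_{0}, \alpha, and \omega are arbitrary illustrative choices), the sketch below prints the roots of the characteristic equation and integrates Eq.(4) directly:

```python
# Numerical companion to Eq.(4) and Eq.(8)/(9). The parameter values below
# (gamma, omega0, alpha, omega) are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

gamma, omega0, alpha, omega = 0.4, 2.0, 1.0, 1.5

# Roots of the characteristic equation, Eq.(8)/(9).
lam = np.roots([1.0, gamma, omega0])
print("characteristic roots:", lam)      # a complex pair here => damped oscillation

# Direct integration of Eq.(4): y'' + gamma*y' + omega0*y = alpha*cos(omega*t).
def rhs(t, state):
    y, v = state
    return [v, alpha * np.cos(omega * t) - gamma * v - omega0 * y]

sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0], max_step=0.05)
print("y at the final time (transient has largely decayed):", sol.y[0, -1])
```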