# An Introduction…

A bit of background about myself: Since my sophomore year of high school, I have been interested in astronomy, physics, and mathematics. I received my Bachelor's degree in Earth and Planetary Sciences, concentrating in astronomy/astrophysics, from Western Connecticut State University. My coursework and independent reading ultimately led to minors in mathematics and physics.

My intention for this blog is that it serve as a reference on astrophysics and related topics, for myself as well as for others. I aim to share my own research interests and to consider selected problems that have fascinated me. I also hope to communicate recent news in the fields of physics and astronomy and to discuss the implications of the discoveries made.

DISCLAIMER: I am by no means an expert, and as such the posts I create reflect my own opinions and my own reasoning. I may sometimes be wrong, and I hope that the people who see this (assuming that anyone sees this) will respect that.

That being said… Enjoy!

(ABOVE: An image of the moon taken with a lunar and planetary imaging camera mounted to a Newtonian 130mm reflecting telescope.)

# Method of Successive Approximations-Elementary Methods of Solving Integral Equations

SOURCES FOR CONTENT:

Tricomi, F.G., 1985. Integral Equations. Dover. Ch. 1.

Wazwaz, A.M., 2015. A First Course in Integral Equations. Ch. 3.6.

When it comes to solving integral equations, there are two main types: Volterra and Fredholm equations. The first of these is the topic of this post. First, a brief introduction. Just as there are equations relating an unknown function to its derivatives, there exists another type of equation which involves the unknown function together with its integral. There is also a class of equations called integro-differential equations (see my series on Monte Carlo methods and radiative transfer), in which the equation involves the function, its derivative, and its integral.

Here, I will be dealing with Volterra integral equations. More specifically, I will be considering the Volterra equation of the second kind (VESK) of the form:

$\displaystyle \alpha(x)=h(x)+\lambda\int_{0}^{x}K(x,y)\alpha(y)dy, (1)$

where $y\in [0,x]$. The function $K(x,y)$ appearing in the integrand is called the kernel. The kernel arises from the conversion of an initial value problem: indeed, solving the integral equation is equivalent to solving the initial value problem for the corresponding differential equation. The difference is that the integral equation has the initial conditions built in, rather than having them imposed near the end of the solution of an IVP.

The fact that we are dealing with Volterra equations means that the kernel is subject to the condition:

$\displaystyle K(x,y)=0 \quad \text{for} \quad y > x.$

The typical way to solve this involves the method of successive approximations (some call this Picard’s method of successive approximations). I first came across this method while taking my differential equations course, in the context of the existence and uniqueness of solutions to first-order differential equations.

The method is as follows:

Suppose we have the equation given in (1). We then define an initial function $\alpha_{0}(x)=h(x).$ Then the next iteration of $\alpha$ is

$\displaystyle \alpha_{1}(x)=h(x)+\lambda\int_{0}^{x}K(x,y)h(y)dy. (2)$

Naturally, the next term $\alpha_{2}(x)$ is

$\displaystyle \alpha_{2}(x)=h(x)+\lambda\int_{0}^{x}K(x,y)h(y)dy+\lambda^{2}\int_{0}^{x}K(x,z)dz \int_{0}^{z}K(z,y)h(y)dy, (3)$

or more simply

$\displaystyle \alpha_{2}(x)=h(x)+\lambda\int_{0}^{x}K(x,y)\alpha_{1}(y)dy. (4)$

The reason I chose to include Eq.(3) is that, while reading about this method, I found the traditional expression (Eq.(4)) left much to be desired: for me, it did not demonstrate the “successive” part of successive approximations. In general, we may become convinced that

$\displaystyle \alpha_{n}(x)=h(x)+\lambda\int_{0}^{x}K(x,y)\alpha_{n-1}(y)dy. (5)$

Once the general expression for $\alpha_{n}(x)$ is determined, we may determine the exact solution $\alpha(x)$ via

$\displaystyle \alpha(x)=\lim_{n\rightarrow \infty}\alpha_{n}(x). (6)$

This is probably one of the simpler methods of solving integral equations: it requires no machinery from real analysis, yet it yields solutions to simple integral equations. I will discuss a few other methods of solution in future posts in this series.

# “Proof” of Alfven’s Theorem of Flux Freezing

SOURCE FOR CONTENT: Choudhuri, A.R., 2010. Astrophysics for Physicists. Ch. 8.

In the previous post we saw the consequences of the different regimes of the magnetic Reynolds number, under which either diffusion or advection of the magnetic field dominates. In this post, I shall give a “proof” of Alfven’s Theorem of Flux Freezing. (I hesitate to call it a proof since it lacks the mathematical rigor that one associates with a proof.) Also note that in this post I will be working under the assumption of a high magnetic Reynolds number.

Alfven’s Theorem of Flux Freezing: Suppose we have a surface $S$ located within a plasma at some initial time $t_{0}$. The magnetic flux linked with the surface $S$ is

$\displaystyle \int_{S}\textbf{B}\cdot d\textbf{S}. (1)$

At some later time $t^{\prime}$, the elements of plasma contained within $S$ at $t_{0}$ will have moved elsewhere and will constitute some different surface $M$. The magnetic flux linked with $M$ at $t^{\prime}$ is

$\displaystyle \int_{M}\textbf{B}\cdot d\textbf{M}, (2)$

from which we may mathematically state the theorem as

$\displaystyle \int_{S}\textbf{B}\cdot d\textbf{S}=\int_{M}\textbf{B}\cdot d\textbf{M}. (3)$

If the magnetic field evolves in time in accordance with the induction equation, we may express Eq.(3) as

$\displaystyle \frac{d}{dt}\int_{S}\textbf{B}\cdot d\textbf{S}=0. (4)$

To confirm that this is true, we note that the magnetic flux may change in two ways: (1) through some intrinsic variability of the magnetic field strength, or (2) through movement of the surface. It follows that

$\displaystyle \frac{d}{dt}\int_{S}\textbf{B}\cdot d\textbf{S}=\int_{S}\frac{\partial \textbf{B}}{\partial t}\cdot d\textbf{S}+\int_{S}\textbf{B}\cdot \frac{d}{dt}(d\textbf{S}). (5)$

Now, consider again the two surfaces. Suppose that $M$ is displaced some distance relative to $S$, and that this displacement occurs during a time interval $\delta t$, so that $t^{\prime}=t_{0}+\delta t.$ The two surfaces, together with the strip swept out between their boundaries, form a closed surface; an area element of this strip is given by the cross product $-\delta t\, \textbf{v}\times \delta\textbf{l}$, where $\delta\textbf{l}$ is a line element of the boundary. Since the magnetic flux through a closed surface vanishes (goes to 0), we may write the difference

$\displaystyle d\textbf{M}-d\textbf{S}-\delta t \oint \textbf{v}\times \delta \textbf{l}=0. (6)$

Recalling the definition of the derivative, we may apply it to the second term on the right-hand side of Eq.(5) to get

$\displaystyle \frac{d}{dt}(d\textbf{S})=\lim_{\delta t\rightarrow 0}\frac{d\textbf{M}-d\textbf{S}}{\delta t}=\oint \textbf{v}\times \delta \textbf{l}. (7)$

Thus the term becomes

$\displaystyle \int_{S}\textbf{B}\cdot \frac{d}{dt}(d\textbf{S})=\int \oint \textbf{B}\cdot (\textbf{v}\times \delta\textbf{l})=\int \oint (\textbf{B}\times \textbf{v})\cdot \delta \textbf{l}. (8)$

Since the loop integrals interior to the boundary of the surface (call this boundary path $C$) cancel, only the boundary contribution survives. Recalling Stokes’ theorem

$\displaystyle \oint_{\partial \Omega}\textbf{F}\cdot d\textbf{r}=\int\int_{\Omega} (\nabla \times \textbf{F})\cdot d\Omega,$

and applying it to Eq.(8) we arrive at

$\displaystyle \int_{S}\textbf{B}\cdot \frac{d}{dt}(d\textbf{S})=\oint_{\partial S}(\textbf{B}\times\textbf{v}) \cdot \delta \textbf{l}=\int\int_{S}\bigg\{\nabla \times (\textbf{B}\times\textbf{v})\bigg\}\cdot d\textbf{S}. (9)$

Recall that we are dealing with a high magnetic Reynolds number. Substituting Eq.(9) into Eq.(5) and using the corresponding form of the induction equation, we arrive at

$\displaystyle \frac{d}{dt}\int_{S}\textbf{B}\cdot d\textbf{S}=\int\int_{S}d\textbf{S}\cdot \bigg\{ \frac{\partial \textbf{B}}{\partial t}-\nabla\times (\textbf{v}\times \textbf{B})\bigg\}= 0. (10)$

Thus, this completes the “proof”.

# Deriving the Bessel Function of the First Kind for Zeroth Order

NOTE: I verified the solution using the following text: Boyce, W. and DiPrima, R. Elementary Differential Equations.

In this post, I shall be deriving the Bessel function of the first kind for the zeroth order Bessel differential equation. Bessel’s equation is encountered when solving differential equations in cylindrical coordinates and is of the form

$\displaystyle x^{2}\frac{d^{2}y}{dx^{2}}+x\frac{dy}{dx}+(x^{2}-\nu^{2})y(x)=0, (1)$

where $\nu$ denotes the order; setting $\nu = 0$ gives Bessel’s equation of order zero. I shall make use of the Frobenius ansatz

$\displaystyle y(x)=\sum_{j=0}^{\infty}a_{j}x^{j+r}, (2)$

where upon taking the first and second order derivatives gives us

$\displaystyle \frac{dy}{dx}=\sum_{j=0}^{\infty}(j+r)a_{j}x^{j+r-1}, (3)$

and

$\displaystyle \frac{d^{2}y}{dx^{2}}=\sum_{j=0}^{\infty}(j+r)(j+r-1)a_{j}x^{j+r-2}. (4)$

Substituting Eqs.(2), (3), and (4) into Eq.(1), we arrive at

$\displaystyle x^{2}\sum_{j=0}^{\infty}(j+r)(j+r-1)a_{j}x^{j+r-2}+x\sum_{j=0}^{\infty}(j+r)a_{j}x^{j+r-1}+x^{2}\sum_{j=0}^{\infty}a_{j}x^{j+r}=0. (5)$

Distribution and simplification of Eq.(5) yields

$\displaystyle \sum_{j=0}^{\infty}\bigg\{(j+r)(j+r-1)+(j+r)\bigg\}a_{j}x^{j+r}+\sum_{j=0}^{\infty}a_{j}x^{j+r+2}=0. (6)$

If we evaluate the terms in which $j=0$ and $j=1$, we get the following

$\displaystyle a_{0}\bigg\{r(r-1)+r\bigg\}x^{r}+a_{1}\bigg\{(1+r)r+(1+r)\bigg\}x^{r+1}+\sum_{j=2}^{\infty}\bigg\{[(j+r)(j+r-1)+(j+r)]a_{j}+a_{j-2}\bigg\}x^{j+r}=0, (7)$

where the final sum has been re-indexed (replacing $j$ by $j-2$ in the summand and starting the sum at $j=2$) so that all powers of $x$ match. Consider now the indicial equation (the coefficient of $a_{0}x^{r}$),

$\displaystyle r(r-1)+r=0, (8)$

which upon solving gives the repeated root $r=r_{1}=r_{2}=0$. From the summation term we may read off the recurrence relation

$\displaystyle a_{j}(r)=\frac{-a_{j-2}(r)}{[(j+r)(j+r-1)+(j+r)]}=\frac{-a_{j-2}(r)}{(j+r)^{2}}. (9)$

To determine $J_{0}(x)$ we let $r=0$ in which case the recurrence relation becomes

$\displaystyle a_{j}=\frac{-a_{j-2}}{j^{2}}, (10)$

where $j=2,4,6,...$. Thus we have

$\displaystyle J_{0}(x)=a_{0}x^{0}+a_{1}x+... (11)$

The only way the second term above can vanish is if $a_{1}=0$: with $r=0$, the coefficient of $x^{r+1}$ in Eq.(7) reduces to $a_{1}$, which must therefore be zero. The recurrence relation then forces all succeeding odd coefficients to vanish: $a_{3}=a_{5}=a_{7}=\dots=0$. For the even terms, let $j=2k$, where $k\in \mathbb{Z}^{+}$; the recurrence relation is again modified to

$\displaystyle a_{2k}=\frac{-a_{2k-2}}{(2k)^{2}}. (12)$

Applying Eq.(12) repeatedly, one finds for any value of $k$ the general term

$\displaystyle a_{2k}x^{2k}=\frac{(-1)^{k}a_{0}x^{2k}}{2^{2k}(k!)^{2}}. (13)$

Thus our solution for the Bessel function of the first kind is

$\displaystyle J_{0}(x)=a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-1)^{k}x^{2k}}{2^{2k}(k!)^{2}}\bigg\}. (14)$
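As a quick numerical sanity check (my own addition, not from the verification text), the series in Eq.(14) with the conventional normalization $a_{0}=1$ can be summed directly:

```python
from math import factorial

def j0_series(x, k_max=20):
    """Partial sum of Eq.(14) with a_0 = 1:
    J_0(x) ~ sum_{k=0}^{k_max} (-1)^k x^(2k) / (2^(2k) (k!)^2),
    where 2^(2k) = 4^k."""
    return sum((-1) ** k * x ** (2 * k) / (4 ** k * factorial(k) ** 2)
               for k in range(k_max + 1))

print(j0_series(0.0))   # J_0(0) = 1
print(j0_series(1.0))   # tabulated value of J_0(1) is about 0.7651976866
```

The factorials in the denominator make the series converge rapidly for moderate $x$, so even a modest number of terms reproduces tabulated values of $J_{0}$.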

# Basics of Tensor Calculus and General Relativity-Vectors and Introduction to Tensors (Part II-Continuation of Vectors)

SOURCE FOR CONTENT: Neuenschwander, D.E., 2015. Tensor Calculus for Physics. Johns Hopkins University Press. Ch.1

In the preceding post of this series, we saw how to define a vector in the traditional sense. There is another formulation, which is the focus of this post. One typically becomes familiar with this formulation in a second course in quantum mechanics, or in a similar form in an introductory linear algebra course.

A vector may be written in two forms: a ket vector,

$\displaystyle |A\rangle=\begin{pmatrix} a_{1}\\ a_{2}\\ \vdots \\ a_{N} \end{pmatrix}, (1.1)$

or a bra vector (conjugate vector):

$\displaystyle \langle A|=\begin{pmatrix} a_{1} & a_{2} & \hdots & a_{N} \end{pmatrix}. (1.2)$

Additionally, if the components $a_{i}$ are complex, then the conjugate vector takes the form

$\displaystyle \langle A|=\begin{pmatrix} a_{1}^{*} & a_{2}^{*} & \hdots & a_{N}^{*} \end{pmatrix}. (1.3)$

In words, if the vector has complex components, we express its conjugate vector in terms of the complex conjugates of those components.

We may form the inner product of two $N$-component vectors by the following

$\displaystyle \langle A|B\rangle = a_{1}^{*}b_{1}+a_{2}^{*}b_{2}+\cdots+a_{N}^{*}b_{N}=\sum_{i=1}^{N}a_{i}^{*}b_{i}. (2)$

Similarly, we may form the outer product of an $N$-component ket and an $M$-component bra as follows

$\displaystyle |A\rangle \langle B|= \begin{pmatrix} a_{1}b_{1}^{*} & a_{1}b_{2}^{*} & \hdots & a_{1}b_{M}^{*}\\ a_{2}b_{1}^{*} & a_{2}b_{2}^{*} & \hdots & a_{2}b_{M}^{*}\\ \vdots & \vdots & \ddots & \vdots \\ a_{N}b_{1}^{*} & a_{N}b_{2}^{*}& \hdots & a_{N}b_{M}^{*}\\ \end{pmatrix}. (3)$

One typically defines a vector as a linear combination of basis vectors, which we shall express by the following:

$\displaystyle \hat{i}=|1\rangle \equiv \begin{pmatrix} 1 \\ 0 \\ 0 \\ \end{pmatrix},$

$\displaystyle \hat{j} = |2\rangle \equiv \begin{pmatrix} 0 \\ 1 \\ 0 \\ \end{pmatrix},$

$\displaystyle \hat{k}=|3\rangle \equiv \begin{pmatrix} 0 \\ 0 \\ 1 \\ \end{pmatrix},$

where in general the basis vectors satisfy $\langle i|j \rangle = \delta_{ij}$. Therefore we can write any arbitrary vector in the following way

$\displaystyle \textbf{A}= a_{1}|1\rangle + a_{2}|2\rangle + ... + a_{n}|n\rangle. (4)$

Moreover, any conjugate vector may be written as

$\displaystyle \langle B|= b_{1}\langle 1| + b_{2}\langle 2| + ... + b_{m}\langle m|. (5)$

Let $\mathcal{P} = |A\rangle \langle B|$ represent the outer product, which may also be written in component form as $\mathcal{P}_{ij}=\langle i|\mathcal{P}|j\rangle$. For an orthonormal basis we also have the condition that

$\displaystyle \textbf{1} = \sum_{\mu}|\mu\rangle \langle \mu|, (6)$

and

$\displaystyle |A\rangle = \sum_{\mu}|\mu\rangle \langle \mu|A\rangle =\sum_{\mu}A^{\mu}|\mu\rangle, (7)$

wherein $\displaystyle A^{\mu}\equiv \langle \mu|A\rangle$. The first relation expresses the completeness of an orthonormal set of basis vectors, and the second follows from inserting this identity in front of $|A\rangle$.
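These manipulations are straightforward to mirror in code. Here is a minimal sketch (my own illustration, in plain Python with no linear-algebra library) of Eqs.(2), (3), and (7):

```python
def inner(a, b):
    """<A|B> = sum_i conj(a_i) * b_i  (Eq.(2), equal-length vectors)."""
    assert len(a) == len(b)
    return sum(x.conjugate() * y for x, y in zip(a, b))

def outer(a, b):
    """|A><B| as an N x M matrix with entries a_i * conj(b_j)  (Eq.(3))."""
    return [[x * y.conjugate() for y in b] for x in a]

# orthonormal basis kets |1>, |2>, |3>
e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
assert inner(e1, e2) == 0 and inner(e1, e1) == 1   # <i|j> = delta_ij

# completeness: sum_mu |mu><mu|A> reproduces |A>  (Eq.(7))
A = [2 + 1j, -1j, 3]
recon = [sum(inner(mu, A) * mu[i] for mu in (e1, e2, e3)) for i in range(3)]
assert recon == A
```

The final assertion is exactly Eq.(7): each coefficient $A^{\mu}=\langle\mu|A\rangle$ is computed with `inner`, and the weighted sum of basis kets rebuilds the original vector.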

The next post will discuss the transformations of coordinates of vectors using both the matrix formulation and using partial derivatives.

# Consequences and some Elementary Theorems of the Ideal One-Fluid Magnetohydrodynamic Equations

SOURCE FOR CONTENT:

Priest, E. Magnetohydrodynamics of the Sun, 2014. Cambridge University Press. Ch.2.;

Davidson, P.A., 2001. An Introduction to Magnetohydrodynamics. Ch.4.

We have seen how to derive the induction equation from Maxwell’s equations, assuming a charge-neutral plasma and non-relativistic plasma velocities. Thus, we have the induction equation as being

$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda \nabla^{2}\textbf{B}. (1)$

Many texts in MHD make the comparison of the induction equation to the vorticity equation

$\displaystyle \frac{\partial \Omega}{\partial t}= \nabla \times (\textbf{v} \times \Omega)+\nu \nabla^{2}\Omega, (2)$

where $\Omega = \nabla \times \textbf{v}$ is the vorticity. Below I shall also make use of the vector identity

$\nabla \times (\textbf{X}\times \textbf{Y})=\textbf{X}(\nabla \cdot \textbf{Y})-\textbf{Y}(\nabla \cdot \textbf{X})+(\textbf{Y}\cdot \nabla)\textbf{X}-(\textbf{X}\cdot \nabla)\textbf{Y}$.

Indeed, if we do compare the induction equation (Eq.(1)) to the vorticity equation (Eq.(2)) we easily see the resemblance between the two. The first term on the right hand side of Eq.(1)/ Eq.(2) determines the advection of magnetic field lines/vortex field lines; the second term on the right hand side deals with the diffusion of the magnetic field lines/vortex field lines.

From this, we can impose restrictions and thus look at the consequences of the induction equation (since it governs the evolution of the magnetic field). Furthermore, we see that we can modify the kinematic theorems of classical vortex dynamics to describe the properties of magnetic field lines. After discussing the direct consequences of the induction equation, I will discuss a few theorems of vortex dynamics and then introduce their MHD analogue.

Inherent to this discussion is the magnetic Reynolds number. In geophysical fluid dynamics, the Reynolds number (not the magnetic Reynolds number) is the ratio between the inertial forces per volume and the viscous forces per volume, given by

$\displaystyle Re=\frac{ul}{\nu}, (3)$

where $u, l, \nu$ represent the typical fluid velocity, the typical length scale, and the kinematic viscosity, respectively. The magnetic Reynolds number $Re_{m}=ul/\lambda$ is the analogous ratio between the advective and diffusive terms of the induction equation. There are two canonical regimes: (1) $Re_{m}<<1$, and (2) $Re_{m}>>1$. The former is sometimes called the diffusive limit, and the latter is called either the ideal limit or the infinite conductivity limit (I prefer to call it the ideal limit, since the term infinite conductivity is not quite accurate).

Case I:  $Re_{m}<<1$

Consider again the induction equation

$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda\nabla^{2}\textbf{B}.$

If we then assume that we are dealing with incompressible flows (i.e. $(\nabla \cdot \textbf{v})=0$) then we can use the aforementioned vector identity to write the induction equation as

$\displaystyle \frac{D\textbf{B}}{Dt}=(\textbf{B}\cdot \nabla)\textbf{v}+\lambda\nabla^{2}\textbf{B}. (4)$

In the regime for which $Re_{m}<<1$, the induction equation for incompressible flows (Eq.(4)) assumes the form

$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\lambda \nabla^{2}\textbf{B}. (5)$

Compare this now to the heat conduction (diffusion) equation,

$\displaystyle \frac{\partial T}{\partial t}=\alpha \nabla^{2}T. (6)$

Just as temperature diffuses through a conducting medium, in this regime the magnetic field lines diffuse through the plasma.
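A minimal 1-D finite-difference sketch (my own illustration, not from the sources) makes this behavior concrete: an initially concentrated field profile spreads out under Eq.(5).

```python
def diffuse(B, lam, dx, dt, steps):
    """Explicit (FTCS) integration of dB/dt = lam * d2B/dx2 (Eq.(5)) in 1-D,
    with the field held at zero on the boundaries.
    Stability requires r = lam*dt/dx**2 <= 0.5."""
    B = list(B)
    r = lam * dt / dx ** 2
    for _ in range(steps):
        B = [0.0] + [B[i] + r * (B[i + 1] - 2.0 * B[i] + B[i - 1])
                     for i in range(1, len(B) - 1)] + [0.0]
    return B

# a concentrated spike of field at the center of the grid
B0 = [0.0] * 20 + [1.0] + [0.0] * 20
B1 = diffuse(B0, lam=1.0, dx=1.0, dt=0.25, steps=10)
# the peak decays and the profile broadens, i.e. the field diffuses
```

The peak amplitude decreases while neighboring cells pick up field, which is the discrete counterpart of the diffusive limit described above.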

Case II: $Re_{m}>>1$

If we now consider the case for which the advective term dominates, we see that the induction equation takes the form

$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B}). (7)$

What this suggests physically is that the magnetic field lines become “frozen-in” to the plasma, giving rise to Alfven’s theorem of flux freezing.

Many astrophysical systems require a high magnetic Reynolds number. Such systems include the solar magnetic field (heliospheric current sheet), planetary dynamos (Earth, Jupiter, and Saturn), and galactic magnetic fields.
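To get a feel for the magnitudes involved, the magnetic Reynolds number $Re_{m}=ul/\lambda$ can be estimated directly. The sketch below uses rough, order-of-magnitude values that I have chosen purely for illustration (they are not taken from the sources above).

```python
def magnetic_reynolds(u, l, eta):
    """Re_m = u*l/eta: ratio of the advective to the diffusive term of the
    induction equation (u in m/s, l in m, magnetic diffusivity eta in m^2/s)."""
    return u * l / eta

# Rough, order-of-magnitude values chosen purely for illustration:
systems = {
    "laboratory plasma":     (1.0e1, 1.0e-1, 1.0),
    "solar convection zone": (1.0e2, 1.0e8, 1.0),
    "galactic disc":         (1.0e4, 1.0e18, 1.0e4),
}
for name, (u, l, eta) in systems.items():
    print(f"{name:22s} Re_m ~ {magnetic_reynolds(u, l, eta):.0e}")
```

Even with crude inputs, the enormous length scales of astrophysical systems drive $Re_{m}$ to very large values, which is why the ideal limit is the relevant regime for them.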

Kelvin’s Theorem & Helmholtz’s Theorem:

Kelvin’s Theorem: Consider a vortex tube. Since the vorticity is divergence-free, $(\nabla \cdot \Omega)=0$, its flux through any closed surface vanishes:

$\displaystyle \oint \Omega \cdot d\textbf{S}=0. (8)$

Consider also a closed curve moving with the fluid (we call this a material curve $C_{m}(t)$); along it we may define the circulation as being

$\displaystyle \Gamma = \oint_{C_{m}(t)}\textbf{v}\cdot d\textbf{l}. (9)$

Thus, Kelvin’s theorem states that if the material curve is closed and it consists of identical fluid particles then the circulation, given by Eq.(9), is temporally invariant.

Helmholtz’s Theorem:

Part I: Suppose we consider a fluid element which lies on a vortex line at some initial time $t=t_{0}$. The theorem states that this fluid element will continue to lie on that vortex line indefinitely.

Part II: This part says that the flux of vorticity

$\displaystyle \Phi = \int \Omega \cdot d\textbf{S}, (10)$

remains constant through each cross-section of the vortex tube and is also invariant with respect to time.

The magnetic analogues of Helmholtz’s theorems are found to be Alfven’s theorem of flux freezing and the conservation of magnetic flux, magnetic field lines, and magnetic topology.

The first says that fluid elements which lie along magnetic field lines will continue to do so indefinitely; it is essentially the same statement as the first Helmholtz theorem.

The second requires a more detailed argument to demonstrate, but it says that the magnetic flux through the plasma remains constant. The third says that the magnetic field lines, and hence the magnetic structure, may be stretched and deformed in many ways, while the overall magnetic topology remains the same.

The justification for these last two require some proof-like arguments and I will leave that to another post.

In my project, I considered the case of a high magnetic Reynolds number in order to examine the MHD processes present in the region of metallic hydrogen in Jupiter’s interior.

In the next post, I will “prove” the theorems I mention and discuss the project.

# Basic Equations of Ideal One-Fluid Magnetohydrodynamics: (Part V) The Energy Equations and Summary

SOURCE FOR CONTENT: Priest E., Magnetohydrodynamics of the Sun, 2014. Ch. 2. Cambridge University Press.

The final subset of equations deals with the energy equations. My undergraduate research did not take the thermodynamics of the conducting fluid into account, in order to keep the mathematics relatively simple. However, a full understanding of MHD must include these considerations. There are three essential equations that make up the energy equations:

1. Heat Equation:

We may write this equation in terms of the entropy $S$ as

$\displaystyle \rho T \bigg\{\frac{\partial S}{\partial t}+(\textbf{v}\cdot \nabla)S\bigg\}=-\mathcal{L}, (1)$

where $\mathcal{L}$, called the energy loss function, represents the net effect of energy sinks and sources. For simplicity, one typically writes the heat equation in the form

$\displaystyle \frac{\rho^{\gamma}}{\gamma -1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\mathcal{L}. (2)$

2. Conduction:

For this equation one considers the explicit form of the energy loss function as being

$\displaystyle \mathcal{L}=\nabla \cdot \textbf{q}+L_{r}-\frac{J^{2}}{\sigma}-F_{H}, (3)$

where $\textbf{q}$ represents heat flux by particle conduction, $L_{r}$ is the net radiation, $J^{2}/\sigma$ is the Ohmic dissipation, and $F_{H}$ represents external heating sources, if any exist.  The term $\textbf{q}$ is given by

$\textbf{q}=-\kappa \nabla T, (4)$

where $\kappa$ is the thermal conduction tensor.

3. Radiation:

The equation for radiation can be written as a variation of the diffusion equation for temperature,

$\displaystyle \frac{DT}{Dt}=\kappa \nabla^{2}T, (5)$

where $\kappa$ here denotes the thermal diffusivity given by

$\displaystyle \kappa = \frac{\kappa_{r}}{\rho c_{P}}. (6)$

We may write the final form of the energy equation as

$\displaystyle \frac{\rho^{\gamma}}{\gamma-1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\nabla \cdot \textbf{q}-L_{r}+J^{2}/\sigma+F_{H}, (7)$

where $\textbf{q}$ is given by Eq.(4).

As far as my undergraduate research is concerned, these equations did not enter into the analysis; I include them here for completeness.

So, to summarize the series so far: I have derived most of the basic equations of the ideal one-fluid model of magnetohydrodynamics. The equations are

$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda \nabla^{2}\textbf{B}, (A)$

$\displaystyle \frac{\partial \textbf{v}}{\partial t}+(\textbf{v}\cdot \nabla)\textbf{v}=-\frac{1}{\rho}\nabla\bigg\{P+\frac{B^{2}}{2\mu_{0}}\bigg\}+\frac{(\textbf{B}\cdot \nabla)\textbf{B}}{\mu_{0}\rho}, (B)$

$\displaystyle \frac{\partial \rho}{\partial t}+\nabla \cdot (\rho\textbf{v})=0, (C)$

$\displaystyle \frac{\partial \Omega}{\partial t}+(\textbf{v}\cdot \nabla)\Omega = (\Omega \cdot \nabla)\textbf{v}+\nu \nabla^{2}\Omega, (D)$

$\displaystyle P = \frac{k_{B}}{m}\rho T = \frac{\tilde{R}}{\tilde{\mu}}\rho T, (E)$ (Ideal Gas Law)

and

$\displaystyle \frac{\rho^{\gamma}}{\gamma-1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\nabla \cdot \textbf{q}-L_{r}+J^{2}/\sigma +F_{H}. (F)$

We also have the following ancillary equations

$\displaystyle (\nabla \cdot \textbf{B})=0, (G.1)$

since we haven’t found evidence of the existence of magnetic monopoles. We also have that

$\displaystyle \nabla \times \textbf{B}=\mu_{0}\textbf{J}, (G.2)$

where we are assuming that the plasma velocity $v << c$ (i.e. non-relativistic). Finally, for incompressible (isopycnal) flows we have $(\nabla \cdot \textbf{v})=0$.

In the next post, I will discuss some of the consequences of these equations and some elementary theorems involving conservation of magnetic flux and magnetic field line topology.

# Solution to the Hermite Differential Equation

One typically encounters the Hermite differential equation in the context of the quantum harmonic oscillator potential and the consequent solution of the Schrödinger equation. However, I will consider this equation in its “raw” mathematical form, viz.

$\displaystyle \frac{d^{2}y}{dx^{2}}-2x\frac{dy}{dx}+\lambda y(x) =0. (1)$

First, we will consider the more general case, leaving $\lambda$ undefined. A future post will consider the second case, $\lambda = 2n$ with $n\in \mathbb{Z}^{+}$, where $\mathbb{Z}^{+}=\bigg\{x\in\mathbb{Z}|x > 0\bigg\}.$

PART I:

Let us assume the solution has the form

$\displaystyle y(x)=\sum_{j=0}^{\infty}a_{j}x^{j}. (2)$

Now we take the necessary derivatives

$\displaystyle y^{\prime}(x)=\sum_{j=1}^{\infty}ja_{j}x^{j-1}, (3)$

$\displaystyle y^{\prime \prime}(x)=\sum_{j=2}^{\infty} j(j-1)a_{j}x^{j-2}, (4)$

where upon substitution yields the following

$\displaystyle \sum_{j=2}^{\infty}j(j-1)a_{j}x^{j-2}-\sum_{j=1}^{\infty}2ja_{j}x^{j}+\sum_{j=0}^{\infty}\lambda a_{j}x^{j}=0. (5)$

Introducing the dummy index $m=j-2$ in the first sum (and then relabeling $m$ as $j$), we arrive at

$\displaystyle \sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}x^{j}-\sum_{j=0}^{\infty}2ja_{j}x^{j}+\sum_{j=0}^{\infty}\lambda a_{j}x^{j}=0. (6)$

Bringing this under one summation sign…

$\displaystyle \sum_{j=0}^{\infty}[(j+2)(j+1)a_{j+2}-2ja_{j}+\lambda a_{j}]x^{j}=0. (7)$

Since $\displaystyle \sum_{j=0}^{\infty}x^{j}\neq 0$, we therefore require that

$\displaystyle (j+2)(j+1)a_{j+2}=(2j - \lambda)a_{j}, (8)$

or

$\displaystyle a_{j+2}=\frac{(2j-\lambda)a_{j}}{(j+2)(j+1)}. (9)$

This is our recurrence relation. Letting $j=0,1,2,3,...$ generates two linearly independent solutions (one even and one odd) in terms of the fundamental coefficients $a_{0}$ and $a_{1}$. Writing $j=2k$ for the even terms and $j=2k+1$ for the odd terms, repeated application of Eq.(9) gives

$\displaystyle y_{even}(x)= a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-\lambda)(4-\lambda)\cdots (4k-4-\lambda)}{(2k)!}x^{2k}\bigg\}, (10)$

and

$\displaystyle y_{odd}(x)=a_{1}\bigg\{x+\sum_{k=1}^{\infty}\frac{(2-\lambda)(6-\lambda)\cdots (4k-2-\lambda)}{(2k+1)!}x^{2k+1}\bigg\}. (11)$

Thus, our final solution is the following

$\displaystyle y(x)=y_{even}(x)+y_{odd}(x), (12.1)$

$\displaystyle y(x)=a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-\lambda)(4-\lambda)\cdots(4k-4-\lambda)}{(2k)!}x^{2k}\bigg\}+a_{1}\bigg\{x+\sum_{k=1}^{\infty}\frac{(2-\lambda)(6-\lambda)\cdots(4k-2-\lambda)}{(2k+1)!}x^{2k+1}\bigg\}. (12.2)$
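As a small sketch (my own addition), the recurrence relation Eq.(9) can be iterated directly. Note that for $\lambda=2n$ one of the two series terminates in a polynomial, which is exactly the Hermite-polynomial case to be treated in a future post; for instance, $\lambda=4$ truncates the even series at $1-2x^{2}$, proportional to $H_{2}(x)=4x^{2}-2$.

```python
def hermite_coeffs(lam, a0, a1, n_max):
    """Series coefficients a_j of y(x) = sum_j a_j x^j for Eq.(1),
    generated from the recurrence a_{j+2} = (2j - lam) a_j / ((j+2)(j+1))."""
    a = [0.0] * (n_max + 1)
    a[0], a[1] = a0, a1
    for j in range(n_max - 1):
        a[j + 2] = (2 * j - lam) * a[j] / ((j + 2) * (j + 1))
    return a

# lam = 4 (n = 2): the even series terminates, giving a0 * (1 - 2x^2)
coeffs = hermite_coeffs(lam=4, a0=1.0, a1=0.0, n_max=8)
```

Once the factor $(2j-\lambda)$ vanishes at $j=\lambda/2$, every subsequent coefficient in that parity chain is zero, which is how the polynomial solutions emerge.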