Radiative Transfer
This section describes the step-by-step equations of the ARTS radiative transfer calculation.
The key variables to look out for are spectral_rad and spectral_rad_jac.
A single step
A single step of the radiative transfer calculation in ARTS solvers is assumed to follow
\[I_1 = J_0 + T_0 \left(I_0 - J_0\right),\]
where \(I_{1}\) is the spectral radiance after the step, \(I_{0}\) is the spectral radiance before the step, \(J_{0}\) is the spectral radiance source at the step, and \(T_{0}\) is the transmittance through the current step.
To get a more intuitive understanding of this equation, a “step” can be thought of as crossing a single layer of the atmosphere. Imagine that the radiation entering the layer at one level is \(I_{0}\), and that the radiation leaving it at another level is \(I_{1}\). The radiation that leaves is the sum of the radiation emitted by the layer itself, \(J_{0}\), and the transmitted difference between the incoming radiation and the local source, \(T_{0} \left(I_{0} - J_{0}\right)\).
The way to compute \(J_{0}\) and \(T_{0}\) is not discussed here. What is important is that they are computed from the state of the model at the levels surrounding the layer. Noting that the radiation coming into the layer may also have been reflected, and can thus depend on all model states, we can write this as
\[I_1 = J_0\left(\vec{x}_0, \vec{x}_1\right) + T_0\left(\vec{x}_0, \vec{x}_1\right) \left[I_0\left(\mathbf{x}\right) - J_0\left(\vec{x}_0, \vec{x}_1\right)\right],\]
where \(\vec{x}_0\) and \(\vec{x}_1\) are the states of the two levels bounding the layer, and \(\mathbf{x}\) is the state of the model everywhere. Both \(\vec{x}_0\) and \(\vec{x}_1\) are covered by \(\mathbf{x}\), though they are not necessarily contained in \(\mathbf{x}\); they can instead be derived from it, e.g., via interpolation. This matters when we compute the Jacobian: the Jacobian is with respect to \(\mathbf{x}\), but the radiative transfer quantities live on \(\vec{x}_0\), \(\vec{x}_1\), etc., so there must always be a map back from \(\vec{x}_0\), \(\vec{x}_1\), etc., to \(\mathbf{x}\).
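To make the step concrete, here is a minimal scalar sketch in Python; the numbers are invented for illustration, and in the full polarized problem \(I\) and \(J\) are Stokes vectors while \(T\) is a matrix:

    import numpy as np

    # Minimal scalar sketch of one step: I_1 = J_0 + T_0 (I_0 - J_0).
    # Illustration only; in the polarized problem I and J are Stokes
    # vectors and T_0 is a matrix.
    I0 = 200.0  # spectral radiance entering the layer
    J0 = 250.0  # spectral radiance source of the layer
    T0 = 0.8    # transmittance through the layer
    I1 = J0 + T0 * (I0 - J0)
    print(I1)   # 210.0, between the incoming radiation and the source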
Note
ARTS notation often refers to the contents of \(\vec{x}_i\) as the targets of the Jacobian, and to a partially flattened version of \(\mathbf{x}\) as the state vector.
The unflattened dimension is the measurement dimension, e.g., frequency or channels.
The concept of targets comes from the fact that the Jacobian is only computed for user-selected variables in \(\mathbf{x}\). These selected variables are thus the user's “targets”.
Multiple steps
We are not always interested in the radiation at level 1, but quite often in the radiation at the end of the atmospheric path. This, and the radiation at intermediate levels, can be computed by simply taking multiple steps of the above equation. Thus, going from level 0 to level \(N\), we get
\[\begin{split}I_1 &= J_0 + T_0 \left(I_0 - J_0\right) \\ I_2 &= J_1 + T_1 \left(I_1 - J_1\right) \\ &\;\;\vdots \\ I_N &= J_{N-1} + T_{N-1} \left(I_{N-1} - J_{N-1}\right).\end{split}\]
Both \(J_i\) and \(T_i\) depend on the level values \(\vec{x}_i\) and \(\vec{x}_{i+1}\), even though these have been omitted for brevity. There are here \(N\) steps or layers, and \(N+1\) levels. Note that these steps are non-commutative, since the step through a layer polarizes the radiation.
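As a hedged sketch of the recursion (scalars again standing in for the Stokes vectors and transmittance matrices, so the non-commutativity does not show up here):

    import numpy as np

    # Step the radiation through N layers: I_{i+1} = J_i + T_i (I_i - J_i).
    rng = np.random.default_rng(1)
    N = 5                             # N layers, N + 1 levels
    J = rng.uniform(200.0, 280.0, N)  # source of each layer
    T = rng.uniform(0.5, 1.0, N)      # transmittance of each layer
    I = 2.7                           # background radiation I_0
    for i in range(N):
        I = J[i] + T[i] * (I - J[i])
    print(I)                          # spectral radiance I_N at the path end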
Note
\(I_0\) is often called the background radiation, as it is the radiation entering the atmosphere from space, from the ground, or elsewhere. It is the background, in other words.
If we expand the above expressions and collect them by transmittance, we get
\[\begin{split}I_1 &= J_0 + T_0 \left(I_0 - J_0\right) \\ I_2 &= J_1 + T_1 \left(J_0 - J_1\right) + T_1 T_0 \left(I_0 - J_0\right) \\ I_3 &= J_2 + T_2 \left(J_1 - J_2\right) + T_2 T_1 \left(J_0 - J_1\right) + T_2 T_1 T_0 \left(I_0 - J_0\right),\end{split}\]
which we can rewrite as a simple sum of terms:
\[\vec{I} = \begin{bmatrix} I_1 \\ I_2 \\ I_3 \\ \vdots \end{bmatrix} = \begin{bmatrix} T_0 & 0 & 0 & \cdots \\ T_1 T_0 & T_1 & 0 & \cdots \\ T_2 T_1 T_0 & T_2 T_1 & T_2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} I_0 - J_0 \\ J_0 - J_1 \\ J_1 - J_2 \\ \vdots \end{bmatrix} + \vec{J}',\]
and so on. Here, \(\vec{J}' = \left[J_0,\, J_1,\, J_2,\, \cdots\right]^T\) is the source vector; we use a prime because it is 0-indexed, whereas the resulting \(\vec{I}\) is 1-indexed.
In practice, it is often more convenient to just use the step-by-step equations for forward calculations, but the matrix above gives insight into optimizations for the Jacobian of the “measured” \(I_N\).
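A scalar sketch of this equivalence: the entries of the matrix above are cumulative transmittance products, and the collected sum reproduces the step-by-step recursion exactly:

    import numpy as np

    # Hedged sketch: collected-by-transmittance sum vs. the recursion.
    rng = np.random.default_rng(1)
    N = 5
    J = rng.uniform(200.0, 280.0, N)
    T = rng.uniform(0.5, 1.0, N)
    I0 = 2.7

    I = I0                            # step-by-step reference
    for i in range(N):
        I = J[i] + T[i] * (I - J[i])

    P = np.cumprod(T[::-1])[::-1]     # P[i] = T_{N-1} * ... * T_i
    I_N = J[-1] + P[0] * (I0 - J[0])
    for i in range(1, N):
        I_N += P[i] * (J[i - 1] - J[i])
    assert np.isclose(I, I_N)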
Partial derivatives
The Jacobian after the last step is important for the retrieval of the state of the atmosphere. In this formalism, it is defined as
\[\frac{\partial I_N}{\partial \mathbf{x}}.\]
We now introduce an alternative notation to make the expressions below more compact,
where, for the sake of keeping the expressions less cluttered, we define \(T_N=1\).
We can summarize the full Jacobian after N steps as
The first term is the background radiation contribution, which is lifted out of the rest of the expression because doing so lets us simplify what remains. Since we know that we can map from \(\vec{x}_i\) to \(\mathbf{x}\), we can write the partial derivatives in terms of \(\vec{x}_i\) instead of \(\mathbf{x}\). The source term then becomes
and for the transmittance, the term becomes
The expression is then mapped back to \(\mathbf{x}\) in the following:
where the last term is 0 for all but \(i=N\) and \(i=N-1\), and where the function \(f\) is defined as the map \(\vec{x}_i\rightarrow\mathbf{x}\). Often in ARTS, \(f\) is just an inverse interpolation operator.
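As a hypothetical illustration of such a map \(f\) (the grid and names are invented, not ARTS internals): if a level value is obtained from the state vector by linear interpolation, the level derivative maps back onto the state vector through the transpose of the interpolation weights:

    import numpy as np

    # If x_i = w @ x (linear interpolation from the state grid to the
    # level), the chain rule gives dI/dx += w * dI/dx_i, spreading the
    # level derivative onto the bracketing grid points.
    x_grid = np.array([0.0, 1.0, 2.0, 3.0])  # hypothetical state grid
    z_i = 1.25                               # position of level i
    j = np.searchsorted(x_grid, z_i) - 1     # bracketing index
    lam = (z_i - x_grid[j]) / (x_grid[j + 1] - x_grid[j])
    w = np.zeros_like(x_grid)
    w[j], w[j + 1] = 1.0 - lam, lam          # interpolation weights

    dI_dxi = 0.7                             # dI_N/dx_i from the RT steps
    dI_dx = w * dI_dxi                       # contribution to dI_N/dx
    print(dI_dx)                             # [0.    0.525 0.175 0.   ]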
Potential improvements
Note that the indexing here is not the same as above, as this is experimental notation. This section exists mostly as a reminder that we assume a constant source and a constant propagation matrix above. If these are allowed to change within a layer, the equations below offer some approaches to achieving that.
Note
These are untested and not how ARTS computes things; they are left as a future exercise so that we can implement them later. The very concept of, for instance, a Dawson function of a matrix is questionable.
Linear source function
This is based on the assumption of a linearly changing source function and a constant propagation matrix through a layer. The indexing of the transmittance and the incoming background radiation is the same as above - in layers. However, the source function indexing below is now shifted to represent levels - the same indexing as we use for the spectral radiance term.
Linear in source means that we assume \(J = J_0 + (J_1 - J_0) \frac{r}{r_x}\), where \(r\) is the distance through the layer and \(r_x\) is the total thickness of the layer. With a constant propagation matrix \(K_0\) through the layer, we define \(T_0 = \exp(-K_0 r_x)\) and get
\[I_1 = T_0 I_0 + \left(1 - T_0\right) J_0 + \left[1 - \frac{1}{r_x} K_0^{-1} \left(1 - T_0\right)\right]\left(J_1 - J_0\right),\]
or
\[I_1 = J_1 + T_0 \left(I_0 - J_0\right) + \frac{1}{r_x} K_0^{-1} \left(1 - T_0\right)\left(J_0 - J_1\right).\]
This is a classical solution to the radiative transfer equation, used to improve numerical stability when the propagation matrix \(K\) is large. Extending this to multiple steps, as for the constant source case, is then just a matter of adding one extra term per layer in the dot products above. If we define
\[\Lambda_i = \frac{1}{r_i} K_i^{-1} \left(1 - T_i\right) = -\left(\log T_i\right)^{-1}\left(1 - T_i\right),\]
we get
\[I_{i+1} = J_{i+1} + T_i \left(I_i - J_i\right) + \Lambda_i \left(J_i - J_{i+1}\right),\]
and again, we can expand these expressions and collect them by transmittance, but this time with an additional term per layer:
\[\begin{split}I_1 &= J_1 + \Lambda_0 \left(J_0 - J_1\right) + T_0 \left(I_0 - J_0\right) \\ I_2 &= J_2 + \Lambda_1 \left(J_1 - J_2\right) + T_1 \Lambda_0 \left(J_0 - J_1\right) + T_1 T_0 \left(I_0 - J_0\right) \\ &\;\;\vdots\end{split}\]
In shortened matrix form, this is:
\[\vec{I} = \begin{bmatrix} I_1 \\ I_2 \\ \vdots \end{bmatrix} = \begin{bmatrix} T_0 & \Lambda_0 & 0 & \cdots \\ T_1 T_0 & T_1 \Lambda_0 & \Lambda_1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} I_0 - J_0 \\ J_0 - J_1 \\ J_1 - J_2 \\ \vdots \end{bmatrix} + \vec{J}.\]
The \(\vec{J} = \left[J_1,\, J_2,\, \cdots\right]^T\) term is now without a prime as it is 1-indexed like \(\vec{I}\), though \(J_0\) still appears in the vector of differences on the right-hand side.
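A hedged scalar sketch of a single linear-in-source step under the constant-\(K\) assumption of this subsection:

    import numpy as np

    # One step of I_{i+1} = J_{i+1} + T_i (I_i - J_i) + Lambda_i (J_i - J_{i+1}),
    # with Lambda_i = (1 - T_i) / (K_i r_i) in the scalar case.
    K, r = 1e-2, 100.0              # absorption [1/m] and thickness [m]
    I0, J0, J1 = 2.7, 250.0, 260.0  # input radiance and level sources
    T0 = np.exp(-K * r)
    Lam = (1.0 - T0) / (K * r)
    I1 = J1 + T0 * (I0 - J0) + Lam * (J0 - J1)
    print(T0, Lam, I1)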
The partial derivative propagation is then derivable in the same way as above,
Again expanding the dot products and using \(T_N=1\) for brevity, we get, for the source partial derivatives at the end of the path:
Following the same procedure as above, we can rewrite this in terms of \(\vec{x}_i\) instead of \(\mathbf{x}\). A key difference from before is that the source term now depends only on \(\vec{x}_i\) and not on \(\vec{x}_{i+1}\).
The derivative contribution is then given by
Linear propagation matrix and source function
This partly relaxes the assumption of constant propagation matrix within the layer but is otherwise similar to linear in source. The indexing is the same for the transmittance and incoming background radiation as above - in layers. However, the source function and propagation matrix indexing below is now shifted to represent levels - the same indexing as we use for the spectral radiance term.
Linear in source means that we assume \(J = J_0 + (J_1 - J_0) \frac{r}{r_x}\), where \(r\) is the distance through the layer and \(r_x\) is the total thickness of the layer. With a linearly changing propagation matrix \(K = K_0 + (K_1 - K_0) \frac{r}{r_x}\), we define \(T_0 = \exp\left(- \overline{K_{0, 1}} r_x\right)\), where \(\overline{K_{0, 1}}\) is the average of \(K_0\) and \(K_1\), and again get
\[I_1 = J_1 + T_0 \left(I_0 - J_0\right) + \Lambda_0 \left(J_0 - J_1\right),\]
where
\[\Lambda_0 = \frac{1}{r_x} T_0 \int_0^{r_x} \exp\left(K_0 s + \frac{K_1 - K_0}{2 r_x} s^2\right) ds.\]
The solution to this may be approximated with the matrix Dawson function \(\mathcal{D}\):
\[\mathbf{\Lambda}_i \approx \frac{1}{r_i} \mathbf{\alpha}^{-1} \left(\mathcal{D}(\mathbf{u}_1) - T_i\, \mathcal{D}(\mathbf{u}_0)\right),\]
where \(\mathbf{\alpha} = \sqrt{(K_{i+1} - K_i)/(2 r_i)}\) (principal square root), \(\mathbf{u}_0 = \frac{1}{2} \mathbf{\alpha}^{-1} K_i\), and \(\mathbf{u}_1 = \frac{1}{2} \mathbf{\alpha}^{-1} K_{i+1}\). The expression should be valid for imaginary as well as real input; however, we find it numerically very unstable for imaginary input. Thus, we only use this approach when the propagation matrix increases significantly along the path, i.e., when \(K_{i+1} \gg K_i\).
The rest of the steps that follow are the same as for the linear source function case above, just replacing \(\Lambda_i\) with this new definition.
Caveats
The approach here uses a matrix Dawson function, which is not what is currently implemented in ARTS. Instead, an element-wise Dawson function is used, which is only correct if the propagation matrix is diagonal in the same basis at both ends of the layer, or potentially if the non-commutative part is very small.
The switch from a linear to a constant propagation matrix when the propagation matrix decreases along the path is also a crude approximation that should be improved in the future. Currently, the argument of the square root must be greater than \(10^{-6}\), or the algorithm reverts to the constant propagation matrix case.
Also be aware that the square root of a matrix is not unique, leading to further potential numerical issues.
Deriving the expressions
This is not an important section for users of ARTS, but is included for completeness to show how the expressions above are derived. It may also be useful if you want to implement other variations of the radiative transfer equation solution.
The matrix case is the same, provided you can treat \(K\) and \(J\) as mostly commuting; this only affects the last case. The path variable is \(s \in [0,r]\), the distance through the layer, with \(I(0)=I_0\) and \(I(r)=I_1\).
The differential equation in all cases is
\[\frac{dI(s)}{ds} = -K(s)\left[I(s) - J(s)\right].\]
The integrating factor is
\[\mu(s) = \exp\left(\int_0^s K(u)\,du\right),\]
and then
\[I(r) = T(r)\, I_0 + T(r) \int_0^r \mu(s)\, K(s)\, J(s)\, ds,\]
with
\[T(r) = \mu(r)^{-1}.\]
All three cases are just different assumptions for \(K(s)\) and \(J(s)\).
Constant \(K\), constant \(J\).
Take
\[\begin{split}K(s)&=K_0 \\ J(s)&=J_0 \\ T_0 &= e^{-K_0 r}.\end{split}\]Then
\[\begin{split}\mu(s) &= e^{K_0 s} \\ T(r) &=e^{-K_0 r} \\ &=T_0.\end{split}\]The solution:
\[\begin{split}I(r) &= T_0 I_0 + T_0 \int_0^r e^{K_0 s}K_0J_0ds \\ &= T_0 I_0 + T_0 \left(\int_0^r e^{K_0 s}ds\right) K_0 J_0 \\ &= T_0 I_0 + T_0 \left(e^{K_0 r} - 1\right) K_0^{-1} K_0 J_0 \\ &= T_0 I_0 + T_0 \left(e^{K_0 r} - 1\right) J_0 \\ &= T_0 I_0 + \left(1 - T_0\right) J_0.\end{split}\]Writing \(I_1 = I(r)\), we get exactly:
\[I_1 = J_0 + T_0 (I_0 - J_0),\]which is the form used above. Note that no part of this derivation depends on whether the expression is scalar or matrix.
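The closed form can be checked against the differential equation directly; a scalar sketch:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Integrate dI/ds = -K (I - J) numerically and compare with the
    # closed form I_1 = J_0 + T_0 (I_0 - J_0).
    K, J, r, I0 = 0.03, 250.0, 100.0, 2.7
    sol = solve_ivp(lambda s, I: -K * (I - J), (0.0, r), [I0],
                    rtol=1e-10, atol=1e-10)
    closed = J + np.exp(-K * r) * (I0 - J)
    print(sol.y[0, -1], closed)     # agree to solver tolerance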
Constant \(K\), linearly varying \(J\). Assume
\[\begin{split}K(s)&=K_0 \\ J(s)&=J_0 + \frac{s}{r}(J_1 - J_0).\end{split}\]Again \(T_0 = e^{-K_0 r}\), and \(\mu(s)=e^{K_0 s}\).
Use the general solution:
\[I(r) = T_0 I_0 + T_0 \int_0^r e^{K_0 s} K_0 J(s)ds.\]Insert \(J(s)\) and split:
\[\begin{split}I(r) &= T_0 I_0 + T_0 \int_0^r e^{K_0 s} K_0 \Bigl[J_0 + \frac{s}{r}(J_1-J_0)\Bigr] ds \\ &= T_0 I_0 + T_0 \left(\int_0^r e^{K_0 s} ds\right) K_0 J_0 + T_0 \left(\int_0^r s e^{K_0 s} ds\right) K_0 \frac{J_1-J_0}{r}.\end{split}\]Compute the two integrals:
\[\begin{split}\int_0^r e^{K_0 s} ds &= \left(e^{K_0 r}-1\right) K_0^{-1} \\ \int_0^r s e^{K_0 s} ds &= \Bigl[se^{K_0 s}K_0^{-1}\Bigr]_0^r - \int_0^r e^{K_0 s} K_0^{-1} ds\\ &= r e^{K_0 r} K_0^{-1} - \left(e^{K_0 r}-1\right)K_0^{-2}.\end{split}\]Insert:
\[I(r) = T_0 I_0 + T_0 \left(e^{K_0 r}-1\right)K_0^{-1} K_0 J_0 + T_0 \Bigl[r e^{K_0 r}K_0^{-1} - \left(e^{K_0 r}-1\right){K_0^{-2}}\Bigr] K_0 \frac{J_1-J_0}{r}.\]Simplify each term, using \(T_0=e^{-K_0 r}\) and cancelling where possible.
First integral term:
\[\begin{split}T_0 \left(e^{K_0 r}-1\right)K_0^{-1} K_0 J_0 &= T_0 \left(e^{K_0 r}-1\right) J_0 \\ &= \left( 1-T_0 \right) J_0.\end{split}\]Second integral term:
\[\begin{split}T_0 \Bigl[r e^{K_0 r}K_0^{-1} - \left(e^{K_0 r}-1\right)K_0^{-2}\Bigr] K_0 \frac{J_1-J_0}{r} &= \left[ T_0 r e^{K_0 r} - T_0 \left(e^{K_0 r}-1\right)K_0^{-1} \right] \frac{J_1-J_0}{r} \\ &= \left[ r - \left(1-T_0\right)K_0^{-1} \right] \frac{J_1-J_0}{r}.\end{split}\]So
\[I(r) = T_0 I_0 + \left( 1-T_0 \right) J_0 +\left[r - \left( 1-T_0 \right)K_0^{-1}\right] \frac{J_1-J_0}{r}.\]Rearrange as
\[I_1 = J_1 + T_0 (I_0 - J_0) + \underbrace{\frac{1}{r} K_0^{-1}(1-T_0)}_{\displaystyle \Lambda_0}(J_0 - J_1),\]which gives the form used above
\[\begin{split}I_1 &= J_1 + T_0 (I_0 - J_0) + \Lambda_0 (J_0 - J_1),\\ \Lambda_0 &= \frac{1}{r} K_0^{-1} \left( 1-T_0 \right) \\ &= -\left(\log T_0\right)^{-1} \left( 1-T_0 \right).\end{split}\]Again, no assumptions were made about scalar vs. matrix. The only subtlety of matrix non-commutativity here is that you have to be aware that you can write, e.g., \(A^3=A A^2=A^2 A\), which means that with \(\exp(A) = \sum_{n=0}^\infty \frac{A^n}{n!}\) you can do \(\exp(A) A = \left(\sum_{n=0}^\infty \frac{A^{n}}{n!}\right) A = \sum_{n=0}^\infty \frac{A^{n+1}}{n!} = A \sum_{n=0}^\infty \frac{A^{n}}{n!} = A \exp(A)\).
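The linear-\(J\) closed form can be checked the same way; a scalar sketch:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Integrate dI/ds = -K_0 (I - J(s)) with J(s) = J_0 + (J_1 - J_0) s / r
    # and compare with I_1 = J_1 + T_0 (I_0 - J_0) + Lambda_0 (J_0 - J_1).
    K0, r, I0, J0, J1 = 0.03, 100.0, 2.7, 240.0, 260.0
    J = lambda s: J0 + (J1 - J0) * s / r
    sol = solve_ivp(lambda s, I: -K0 * (I - J(s)), (0.0, r), [I0],
                    rtol=1e-10, atol=1e-10)
    T0 = np.exp(-K0 * r)
    Lam = (1.0 - T0) / (K0 * r)     # = -(1 - T0) / log(T0)
    closed = J1 + T0 * (I0 - J0) + Lam * (J0 - J1)
    print(sol.y[0, -1], closed)     # agree to solver tolerance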
Implementation note
It is very important to implement the expression for \(\Lambda_0\) in a numerically stable way. The expression above is not stable for small \(K_0 r\) (i.e., large \(T_0\)) as written. The key instability stems from the subtraction of two nearly equal terms in \(1 - T_0\). The IEEE floating point standard provides a function expm1(x), which computes \(e^x - 1\) in a numerically stable way for small \(x\). Likewise, the matrix expansion of \(1 - T_0\) might be unstable.
So we use a special solution implementing our own version of the reduced Cayley-Hamilton theorem to compute \(\Lambda_0\) in a numerically stable way for matrices that conform to the propagation matrix notation in ARTS. This makes use of the inversion of \(K\) to remove components from the expansion of the matrix exponential that would otherwise cause numerical instability.
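In the scalar case, the instability and the expm1 fix look as follows; this is a sketch of the idea, not the ARTS implementation:

    import numpy as np

    # For small optical depth tau = K_0 r, the naive 1 - exp(-tau)
    # suffers catastrophic cancellation; expm1 computes e^x - 1 accurately.
    tau = 1e-12
    naive = (1.0 - np.exp(-tau)) / tau  # polluted by cancellation
    stable = -np.expm1(-tau) / tau      # accurate; tends to 1 as tau -> 0
    print(naive, stable)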
Linear \(K(s)\) and linear \(J(s)\).
Now set both \(K\) and \(J\) to be linear in \(s\) across the layer:
\[\begin{split}K(s) &= K_0 + \frac{s}{r}(K_1-K_0),\\ J(s) &= J_0 + \frac{s}{r}(J_1-J_0).\end{split}\]Let
\[\begin{split}\alpha &= \frac{K_1-K_0}{r},\\ \beta &= \frac{J_1-J_0}{r},\\ K(s) &= K_0 + \alpha s,\\ J(s) &= J_0 + \beta s.\\\end{split}\]3.1. Integrating factor and transmittance.
The integrating factor and transmittance become
\[\begin{split}\mu(s) &= \exp\Bigl(\int_0^s K(u)du\Bigr)\\ &= \exp\Bigl(\int_0^s (K_0 + \alpha u)du\Bigr) \\ &= \exp\bigl(K_0 s + \tfrac12 \alpha s^2\bigr)\\ T(r) &= \mu(r)^{-1}\\ &= \exp\bigl(-K_0 r - \tfrac12 \alpha r^2\bigr)\\ &= T_0.\end{split}\]That’s the same \(T_0\) as always, assuming that we only evaluate the expression at \(0\) and \(r\), and that the layer value of \(K\) above is the average of the endpoints (\(K_0\) and \(K_1\)).
3.2. General solution. The general solution is still
\[I(r) = T_0 I_0 + T_0 \int_0^r \mu(s) K(s) J(s)\,ds.\]Insert the forms:
\[\begin{split}\mu(s) &= e^{K_0 s + \frac12 \alpha s^2},\\ K(s) &= K_0 + \alpha s,\\ J(s) &= J_0 + \beta s\\ &= J_1 + (J_0 - J_1)\Bigl(1 - \frac{s}{r}\Bigr),\end{split}\]where in the last step we used that \(J(r)=J_1\).
Then
\[I(r) = T_0 I_0 + T_0 \int_0^r \mu(s) K(s) \Bigl[J_1 + (J_0-J_1)\Bigl(1 - \frac{s}{r}\Bigr)\Bigr]\,ds.\]Split the integral into the part proportional to \(J_1\) and the part proportional to \((J_0-J_1)\):
\[I(r) = T_0 I_0 + T_0 \left(\int_0^r \mu(s) K(s)\,ds\right) J_1 + T_0 \left( \int_0^r \mu(s) K(s) \Bigl(1 - \frac{s}{r}\Bigr)\,ds \right) (J_0-J_1).\]The first integral is the same as for a constant source \(J(s)\equiv J_1\). For any \(K(s)\), that problem has the known solution
\[I(r) = J_1 + T_0 (I_0 - J_1),\]so comparing with the integral form gives
\[\begin{split}T_0 \left(\int_0^r \mu(s) K(s)\,ds\right) J_1 &= (1 - T_0) J_1 \quad\Rightarrow\quad\\ \int_0^r \mu(s) K(s)\,ds &= T_0^{-1} - 1.\end{split}\]Insert this back:
\[I(r) = T_0 I_0 + (1-T_0) J_1 + T_0 \left( \int_0^r \mu(s) K(s) \Bigl(1 - \frac{s}{r}\Bigr)\,ds \right) (J_0-J_1).\]Rearranging the first two terms:
\[T_0 I_0 + (1-T_0) J_1 = J_1 + T_0 (I_0 - J_1) = J_1 + T_0 (I_0 - J_0) + T_0 (J_0 - J_1),\]so
\[I(r) = J_1 + T_0 (I_0 - J_0) + \Biggl[ T_0 + T_0 \int_0^r \mu(s) K(s)\Bigl(1 - \frac{s}{r}\Bigr)\,ds \Biggr] (J_0-J_1).\]This shows that the solution has the affine form
\[I_1 = J_1 + T_0 (I_0 - J_0) + \Lambda_0 (J_0 - J_1),\]with
\[\Lambda_0 = T_0 \Biggl[ 1 + \int_0^r \mu(s) K(s)\Bigl(1 - \frac{s}{r}\Bigr)\,ds \Biggr].\]To obtain a more compact expression for \(\Lambda_0\), use integration by parts on the remaining integral. Note that
\[\bigl[\mu(s)\bigl(1 - \tfrac{s}{r}\bigr)\bigr]' = \mu'(s)\Bigl(1 - \frac{s}{r}\Bigr) - \frac{1}{r}\mu(s) = \mu(s)K(s)\Bigl(1 - \frac{s}{r}\Bigr) - \frac{1}{r}\mu(s),\]hence
\[\mu(s)K(s)\Bigl(1 - \frac{s}{r}\Bigr) = \bigl[\mu(s)\bigl(1 - \tfrac{s}{r}\bigr)\bigr]' + \frac{1}{r}\mu(s).\]Integrating this from \(0\) to \(r\) gives
\[\begin{split}\int_0^r \mu(s)K(s)\Bigl(1 - \frac{s}{r}\Bigr) ds &= \Bigl[\mu(s)\Bigl(1 - \frac{s}{r}\Bigr)\Bigr]_0^r + \frac{1}{r}\int_0^r \mu(s)\,ds \\ &= -1 + \frac{1}{r}\int_0^r \mu(s)\,ds,\end{split}\]because \(\mu(0)=1\) and \(1-r/r=0\). Inserting this into the expression for \(\Lambda_0\) gives
\[\Lambda_0 = T_0\Biggl[1 - 1 + \frac{1}{r}\int_0^r \mu(s)\,ds\Biggr] = \frac{1}{r} T_0 \int_0^r \mu(s)\,ds.\]Finally, for the linear \(K(s)\) assumed above,
\[\mu(s) = \exp\Bigl(\int_0^s K(u)du\Bigr) = \exp\Bigl(K_0 s + \frac{K_1-K_0}{2r} s^2\Bigr),\]so we obtain exactly
\[\Lambda_0 = \frac{1}{r} T_0 \int_0^r \exp\Bigl(K_0 s + \frac{K_1-K_0}{2r} s^2\Bigr)\,ds,\]which is the expression used above (with index \(i\)):
\[\Lambda_i = \frac{1}{r_i} T_i \int_0^{r_i} \exp\left(K_i s + \frac{K_{i+1}-K_i}{2 r_i} s^2\right) ds.\]In other words, \(\Lambda_i\) is precisely the coefficient multiplying \((J_i - J_{i+1})\) that comes from solving the ODE with both \(K\) and \(J\) linear in \(s\).
3.3. Dawson function connection. The starting point is
\[\Lambda_0 = \frac{1}{r} T_0 \int_0^{r} \exp\Bigl(K_0 s + \tfrac{K_1-K_0}{2r} s^2\Bigr)\,ds,\]which is valid both for scalar and matrix-valued \(K_0, K_1\). In the remainder of this subsection we first treat the scalar case (or, equivalently, the case where \(K_0\) and \(K_1\) commute and can be simultaneously diagonalized), because only then can we complete the square and express the integral in terms of the (scalar) Dawson function.
Using \(\alpha = (K_1-K_0)/r\) as above, the integral
\[\int_0^{r} \exp\Bigl(K_0 s + \tfrac12 \,\alpha s^2\Bigr)\,ds\]is a Gaussian-type integral with a linear term in the exponent. Completing the square gives
\[\begin{split}K_0 s + \tfrac12 \,\alpha s^2 &= \tfrac12 \,\alpha\Bigl(s^2 + 2\frac{K_0}{\alpha} s\Bigr) \\ &= \tfrac12 \,\alpha\Bigl\{\bigl(s + \tfrac{K_0}{\alpha}\bigr)^2 - \bigl(\tfrac{K_0}{\alpha}\bigr)^2\Bigr\}.\end{split}\]Thus
\[\int_0^{r} \exp\Bigl(K_0 s + \tfrac12 \,\alpha s^2\Bigr)ds = e^{-\frac{K_0^2}{2\alpha}} \int_0^{r} \exp\Bigl(\tfrac12 \,\alpha\bigl(s + \tfrac{K_0}{\alpha}\bigr)^2\Bigr)ds.\]With the substitution
\[t = \sqrt{\tfrac{\alpha}{2}}\Bigl(s + \tfrac{K_0}{\alpha}\Bigr) \quad\Rightarrow\quad ds = \sqrt{\tfrac{2}{\alpha}}\,dt,\]the integral becomes
\[\sqrt{\tfrac{2}{\alpha}}\, e^{-\frac{K_0^2}{2\alpha}} \int_{t_0}^{t_1} e^{t^2}\,dt,\]which can be expressed in terms of the Dawson function
\[D(z) = e^{-z^2}\int_0^z e^{u^2}\,du.\]Carrying this through in the scalar case gives the representation
\[\Lambda_0 = \frac{1}{r}\,\sigma^{-1}\bigl(D(u_1) - T_0\,D(u_0)\bigr),\]where
\[\begin{split}\sigma &= \sqrt{\frac{K_1-K_0}{2r}}, \\ u_0 &= \tfrac12 \sigma^{-1} K_0, \\ u_1 &= \tfrac12 \sigma^{-1} K_1.\end{split}\]Extension to matrices. For matrix-valued \(K_0,K_1\), the exact expression for \(\Lambda_0\) is still
\[\Lambda_0 = \frac{1}{r}\,T_0 \int_0^{r} \exp\Bigl(K_0 s + \tfrac{K_1-K_0}{2r} s^2\Bigr)\,ds,\]but the “complete the square” steps above only go through without change if all matrices involved commute (for example, when \(K_0\) and \(K_1\) are simultaneously diagonalizable). Under this commuting assumption we may treat the matrices as if they were scalars and define matrix analogues
\[\begin{split}\boldsymbol{\sigma} = \sqrt{\frac{K_{i+1} - K_i}{2 r_i}} \quad\text{(principal matrix square root)},\\ \mathbf{u}_0 = \tfrac12 \boldsymbol{\sigma}^{-1} K_i, \qquad \mathbf{u}_1 = \tfrac12 \boldsymbol{\sigma}^{-1} K_{i+1},\end{split}\]and a matrix Dawson function \(\mathcal{D}\) by functional calculus applied to these commuting matrices. This leads to the formal matrix expression
\[\mathbf{\Lambda}_i = \frac{1}{r_i}\,\boldsymbol{\sigma}^{-1} \Bigl(\mathcal{D}(\mathbf{u}_1) - T_i\,\mathcal{D}(\mathbf{u}_0)\Bigr),\]where all factors on the right-hand side commute because they are analytic functions of \(K_i\) and \(K_{i+1}\).
In practice, ARTS does not evaluate a full matrix Dawson function. Instead, it applies the scalar Dawson function element-wise in a fixed basis, which is only strictly valid if the propagation matrix is diagonal (or nearly diagonal) in that basis at both ends of the layer. This is why the documentation refers to the Dawson-based expression as an approximation in the general polarized case.
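In the scalar (or commuting) case, the Dawson representation can be verified against direct quadrature of the \(\Lambda_0\) integral; a sketch using scipy:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import dawsn

    # Lambda_0 from the Dawson form vs. quadrature of
    # (T_0 / r) * int_0^r exp(K_0 s + (K_1 - K_0) s^2 / (2 r)) ds,
    # for K increasing along the path so that sigma is real.
    K0, K1, r = 0.01, 0.05, 100.0
    T0 = np.exp(-0.5 * (K0 + K1) * r)   # exp of minus the mean K times r
    sigma = np.sqrt((K1 - K0) / (2.0 * r))
    u0, u1 = 0.5 * K0 / sigma, 0.5 * K1 / sigma
    dawson = (dawsn(u1) - T0 * dawsn(u0)) / (sigma * r)
    integral, _ = quad(lambda s: np.exp(K0 * s + (K1 - K0) * s**2 / (2 * r)),
                       0.0, r)
    print(dawson, T0 * integral / r)    # agree to quadrature accuracy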
Implementation note
This method requires computing the matrix square root of \((K_{i+1}-K_i)/(2 r_i)\). It also requires a matrix Dawson function. Neither of these operations is numerically stable. It is therefore not recommended to use this method unless you have a good reason to do so. It is mainly included here for completeness, and as an indicator for future improvements of the radiative transfer solver in ARTS.