json-fortran 1.0.0
I just tagged the 1.0.0 release of json-fortran. You're welcome, interwebs.
Rodrigues' rotation formula can be used to rotate a vector \(\mathbf{v}\) a specified angle \(\theta\) about a specified rotation axis \(\mathbf{k}\):
\[ \mathbf{v}_\mathrm{rot} = \mathbf{v}\cos\theta + (\hat{\mathbf{k}}\times\mathbf{v})\sin\theta + \hat{\mathbf{k}}\,(\hat{\mathbf{k}}\cdot\mathbf{v})(1-\cos\theta) \]
where \(\hat{\mathbf{k}}\) is the unit vector along \(\mathbf{k}\).
A Fortran routine to accomplish this (taken from the vector module in the Fortran Astrodynamics Toolkit) is:
subroutine rodrigues_rotation(v,k,theta,vrot)
    ! note: wp, one, unit, and cross are provided by the enclosing vector module
    implicit none
    real(wp),dimension(3),intent(in)  :: v     !vector to rotate
    real(wp),dimension(3),intent(in)  :: k     !rotation axis
    real(wp),intent(in)               :: theta !rotation angle [rad]
    real(wp),dimension(3),intent(out) :: vrot  !result

    real(wp),dimension(3) :: khat
    real(wp) :: ct,st

    ct = cos(theta)
    st = sin(theta)
    khat = unit(k)
    vrot = v*ct + cross(khat,v)*st + khat*dot_product(khat,v)*(one-ct)

end subroutine rodrigues_rotation
This operation can also be converted into a rotation matrix, using the equation:
\[ \mathbf{R} = \mathbf{I} + \sin\theta\,[\hat{\mathbf{k}}\times] + (1-\cos\theta)\,[\hat{\mathbf{k}}\times]^2 \]
where the matrix \([\hat{\mathbf{k}}\times]\) is the skew-symmetric cross-product matrix and \(\mathbf{v}_\mathrm{rot} = \mathbf{R} \mathbf{v}\).
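For reference, here is a minimal sketch of this conversion (not from the toolkit: the rotation_matrix function name is mine, and it assumes the axis passed in has already been normalized):

function rotation_matrix(khat,theta) result(r)
    ! Sketch: build the Rodrigues rotation matrix
    !   R = I + sin(theta)*[khat x] + (1-cos(theta))*[khat x]^2
    ! Assumes khat is already a unit vector.
    use, intrinsic :: iso_fortran_env, wp => real64
    implicit none
    real(wp),dimension(3),intent(in) :: khat  !unit rotation axis
    real(wp),intent(in) :: theta              !rotation angle [rad]
    real(wp),dimension(3,3) :: r
    real(wp),dimension(3,3) :: kx, eye
    integer :: i
    eye = 0.0_wp
    do i=1,3
        eye(i,i) = 1.0_wp
    end do
    ! skew-symmetric cross-product matrix (reshape fills it column by column):
    kx = reshape([ 0.0_wp ,  khat(3), -khat(2), &
                  -khat(3),  0.0_wp ,  khat(1), &
                   khat(2), -khat(1),  0.0_wp ], [3,3])
    r = eye + sin(theta)*kx + (1.0_wp-cos(theta))*matmul(kx,kx)
end function rotation_matrix

The rotated vector is then just matmul(r, v).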
GCC 4.9.1 has been released. The big news for Fortran users is that OpenMP 4.0 is now supported in gfortran.
I'm starting a new project on GitHub: the Fortran Astrodynamics Toolkit. Hardly anyone is developing open source orbital mechanics software for modern Fortran, so the time has come. Most of the code from this blog will eventually find its way there. My goal is to include modern Fortran implementations of all the standard orbital mechanics algorithms.
Then we'll see where it goes from there. The code will be released under a permissive BSD style license.
Here is a simple Fortran subroutine to return only the unique values in a vector (inspired by Matlab's unique function). Note that this implementation is for integer arrays, but it could easily be modified for any type. The code is not particularly optimized, so there may be a more efficient way to do it, but it does demonstrate Fortran array transformational functions such as pack and count. Note that to duplicate the Matlab function exactly, the output array must also be sorted.
subroutine unique(vec,vec_unique)
    ! Return only the unique values from vec.

    implicit none

    integer,dimension(:),intent(in) :: vec
    integer,dimension(:),allocatable,intent(out) :: vec_unique

    integer :: i,num
    logical,dimension(size(vec)) :: mask

    mask = .false.

    do i=1,size(vec)

        !count the number of occurrences of this element:
        num = count( vec(i)==vec )

        if (num==1) then
            !there is only one, flag it:
            mask(i) = .true.
        else
            !flag this value only if it hasn't already been flagged:
            if (.not. any(vec(i)==vec .and. mask) ) mask(i) = .true.
        end if

    end do

    !return only flagged elements:
    allocate( vec_unique(count(mask)) )
    vec_unique = pack( vec, mask )

    !if you also need it sorted, then do so.
    ! For example, with slatec routine:
    !call ISORT (vec_unique, [0], size(vec_unique), 1)

end subroutine unique
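A quick usage sketch (the input values are just made up for illustration; unique is assumed to be available via a module or a contains block so that its allocatable output argument has an explicit interface):

program test_unique
    implicit none
    integer,dimension(:),allocatable :: u
    call unique([1,5,5,3,1,2,2,2,9], u)
    write(*,'(*(i0,1x))') u  ! prints: 1 5 3 2 9 (first-occurrence order, not sorted)
contains
    ! ... paste the unique subroutine from above here ...
end program test_unique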
I agree with everything said in this article: My Corner of the World: C++ vs Fortran. I especially like the bit contrasting someone trying to learn C++ for the first time in order to do some basic linear algebra (like a matrix multiplication). Of course, in Fortran, this is a built-in part of the language:
matrix_c = matmul(matrix_a, matrix_b)
Whereas, for C++:
The bare minimum for doing the same exercise in C++ is that you have to get your hands on a library that does what you want. So you have to know about the existence of Eigen or another library. Then you have to figure out how to call it. Then you have to figure out how to link to an external library. And the documentation is probably not that helpful. There are friendly C++ users for sure. It's just that you won't have a clue what they're telling you to do. They'll be telling you to set up a makefile. They'll be recommending that you look at Boost and talking about how great it is and you'll be all WTF.
And before you use that C++ library, are you really sure you won't end up with a memory leak? Do you even know what a memory leak is?
In the last few years, a number of excellent books have been published about modern Fortran:
In addition, other more general-purpose references on modern programming concepts (such as object-oriented programming and high performance computing) tailored to the Fortran programmer are:
All of these are highly recommended, especially if you want to learn modern Fortran for the first time. Under no circumstances should you purchase any Fortran book with the numbers "77", "90", or "95" in the title.
Updated July 7, 2014
Poorly commented source code is one of my biggest pet peeves. Not only should the comments explain what the routine does, but also where the algorithm came from and what its limitations are. Consider ISORT, a non-recursive quicksort routine from SLATEC, written in 1976. In it, we encounter the following code:
R = 0.375E0
...
IF (R .LE. 0.5898437E0) THEN
   R = R+3.90625E-2
ELSE
   R = R-0.21875E0
ENDIF
What are these magic numbers? An internet search for "0.5898437" yields dozens of hits, many of them different versions of this algorithm translated into various other programming languages (including C, C++, and free-format Fortran). One version of the same algorithm even prefaces this code with the following helpful comment:
! And now...Just a little black magic...
Frequently, old code originally written in single precision can be improved on modern computers by updating to full-precision constants (replacing a hard-coded 3.14159 with a parameter computed at compile time as acos(-1.0_wp) is a classic example). Is this the case here? Sometimes WolframAlpha is useful in these circumstances. It tells me that a possible closed form of 0.5898437 is: \(\frac{57}{100\pi}+\frac{13\pi}{100} \approx 0.58984368009\). Hmmmmm... The reference given in the header [1] is no help; it doesn't contain this bit of code at all.
It turns out that what this code is doing is generating a pseudorandom number, which is used to select the pivot element in the quicksort algorithm. The code produces the following repeating sequence of values for R:
0.375
0.4140625
0.453125
0.4921875
0.53125
0.5703125
0.609375
0.390625
0.4296875
0.46875
0.5078125
0.546875
0.5859375
0.625
0.40625
0.4453125
0.484375
0.5234375
0.5625
0.6015625
0.3828125
0.421875
0.4609375
0.5
0.5390625
0.578125
0.6171875
0.3984375
0.4375
0.4765625
0.515625
0.5546875
0.59375
This places the pivot point near the middle of the set. This scheme comes from Reference [2], and is also mentioned in Reference [3]. According to [2]:
These four decimal constants, which are respectively 48/128, 75.5/128, 28/128, and 5/128, are rather arbitrary.
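For illustration, here is a small free-form sketch (mine, with the constants written as those exact fractions) that reproduces the 33-value sequence listed above:

program pivot_sequence
    ! Reproduce the repeating pseudorandom pivot sequence used by ISORT.
    implicit none
    real :: r
    integer :: i
    r = 48.0/128.0                  ! 0.375
    do i = 1, 33
        write(*,'(f9.7)') r
        if (r <= 75.5/128.0) then   ! the 0.5898437E0 test
            r = r + 5.0/128.0       ! add 0.0390625
        else
            r = r - 28.0/128.0      ! subtract 0.21875
        end if
    end do
end program pivot_sequence

Since every value of R is an exact multiple of 1/128, the sequence never lands on the 75.5/128 boundary, and the behavior is identical to the original single-precision constants.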
The other magic numbers in this routine are the dimensions of these variables:
INTEGER IL(21), IU(21)
These are workspace arrays used by the subroutine, since it does not employ recursion. But since they have a fixed size, there is a limit to the size of the input array this routine can sort. What is that limit? You would not know from the documentation in this code. You have to go back to the original reference [1] (where, in fact, these arrays only had 16 elements). There, it explains that the arrays IL(K) and IU(K) permit sorting up to \(2^{k+1}-1\) elements (131,071 elements for the k=16 case). With k=21, that means the ISORT routine will work for arrays of up to 4,194,303 elements. So, keep that in mind if you are using this routine.
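As a defensive measure, a caller could check that limit up front. A minimal sketch (the safe_isort wrapper name is mine; the dummy second argument follows the KFLAG=1 convention shown earlier):

subroutine safe_isort(ix)
    ! Sketch: guard against exceeding ISORT's fixed workspace
    ! (IL(21)/IU(21) allows at most 2**22 - 1 = 4194303 elements).
    implicit none
    integer,dimension(:),intent(inout) :: ix
    if (size(ix) > 2**22 - 1) error stop 'array too large for ISORT workspace'
    call isort(ix, [0], size(ix), 1)  ! KFLAG=1: sort ix ascending, second arg unused
end subroutine safe_isort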
There are many other implementations of the quicksort algorithm (which was declared one of the top 10 algorithms of the 20th century). In the LAPACK quicksort routine DLASRT, k=32 and the "median of three" method is used to select the pivot. The quicksort method in R is the same algorithm as ISORT, except that k=40 (this code also has the advantage of being properly documented, unlike the other two).
The "direct" geodetic problem is: given the latitude and longitude of one point and the azimuth and distance to a second point, determine the latitude and longitude of that second point. The solution can be obtained using the algorithm by Polish-American geodesist Thaddeus Vincenty [1]. A modern Fortran implementation is given below:
subroutine direct(a,f,glat1,glon1,glat2,glon2,faz,baz,s)

    use, intrinsic :: iso_fortran_env, wp => real64

    implicit none

    real(wp),intent(in)  :: a     !semimajor axis of ellipsoid [m]
    real(wp),intent(in)  :: f     !flattening of ellipsoid [-]
    real(wp),intent(in)  :: glat1 !latitude of 1 [rad]
    real(wp),intent(in)  :: glon1 !longitude of 1 [rad]
    real(wp),intent(in)  :: faz   !forward azimuth 1->2 [rad]
    real(wp),intent(in)  :: s     !distance from 1->2 [m]
    real(wp),intent(out) :: glat2 !latitude of 2 [rad]
    real(wp),intent(out) :: glon2 !longitude of 2 [rad]
    real(wp),intent(out) :: baz   !back azimuth 2->1 [rad]

    real(wp) :: r,tu,sf,cf,cu,su,sa,csa,c2a,x,c,d,y,sy,cy,cz,e

    real(wp),parameter :: pi  = acos(-1.0_wp)
    real(wp),parameter :: eps = 0.5e-13_wp

    r = 1.0_wp-f
    tu = r*sin(glat1)/cos(glat1)
    sf = sin(faz)
    cf = cos(faz)
    baz = 0.0_wp
    if (cf/=0.0_wp) baz = atan2(tu,cf)*2.0_wp
    cu = 1.0_wp/sqrt(tu*tu+1.0_wp)
    su = tu*cu
    sa = cu*sf
    c2a = -sa*sa+1.0_wp
    x = sqrt((1.0_wp/r/r-1.0_wp)*c2a+1.0_wp)+1.0_wp
    x = (x-2.0_wp)/x
    c = 1.0_wp-x
    c = (x*x/4.0_wp+1.0_wp)/c
    d = (0.375_wp*x*x-1.0_wp)*x
    tu = s/r/a/c
    y = tu

    do  ! iterate until convergence
        sy = sin(y)
        cy = cos(y)
        cz = cos(baz+y)
        e = cz*cz*2.0_wp-1.0_wp
        c = y
        x = e*cy
        y = e+e-1.0_wp
        y = (((sy*sy*4.0_wp-3.0_wp)*y*cz*d/6.0_wp+x)*d/4.0_wp-cz)*sy*d+tu
        if (abs(y-c)<=eps) exit
    end do

    baz = cu*cy*cf-su*sy
    c = r*sqrt(sa*sa+baz*baz)
    d = su*cy+cu*sy*cf
    glat2 = atan2(d,c)
    c = cu*cy-su*sy*cf
    x = atan2(sy*sf,c)
    c = ((-3.0_wp*c2a+4.0_wp)*f+4.0_wp)*c2a*f/16.0_wp
    d = ((e*cy*c+cz)*sy*c+y)*sa
    glon2 = glon1+x-(1.0_wp-c)*d*f
    baz = atan2(sa,baz)+pi

end subroutine direct
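A usage sketch (the starting point, azimuth, and distance below are arbitrary values of my own choosing; a and f are the WGS84 ellipsoid constants), assuming the subroutine above is compiled and linked in:

program direct_example
    use, intrinsic :: iso_fortran_env, wp => real64
    implicit none
    real(wp),parameter :: deg2rad = acos(-1.0_wp)/180.0_wp
    real(wp),parameter :: a = 6378137.0_wp             ! WGS84 semimajor axis [m]
    real(wp),parameter :: f = 1.0_wp/298.257223563_wp  ! WGS84 flattening
    real(wp) :: glat2, glon2, baz
    ! from 30 deg N, 86 deg W, go 100 km on an initial azimuth of 45 deg:
    call direct(a, f, 30.0_wp*deg2rad, -86.0_wp*deg2rad, &
                glat2, glon2, 45.0_wp*deg2rad, baz, 100000.0_wp)
    write(*,*) 'lat2 [deg] =', glat2/deg2rad
    write(*,*) 'lon2 [deg] =', glon2/deg2rad
    write(*,*) 'baz  [deg] =', baz/deg2rad
end program direct_example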
Runge-Kutta Nyström methods can be used to integrate second-order ordinary differential equations of the form:
\[ \ddot{\mathbf{x}} = \mathbf{f}(t,\mathbf{x}) \quad \text{(a)} \]
There are also versions for the more general form:
\[ \ddot{\mathbf{x}} = \mathbf{f}(t,\mathbf{x},\dot{\mathbf{x}}) \quad \text{(b)} \]
For second-order systems, Nyström methods are generally more efficient than the more familiar Runge-Kutta methods. The following algorithm is a 4th order Nyström method from Reference [1], encapsulated into a simple Fortran module. It integrates equations of type (b). Several others are also given in the same paper. A high-accuracy variable step size Nyström method for systems of type (a) can also be found in Reference [2].
module nystrom

    use, intrinsic :: iso_fortran_env, wp=>real64

    implicit none

    private

    ! derivative function prototype
    abstract interface
        function deriv(n,t,x,xd) result(xdd)
            import :: wp
            integer,intent(in) :: n
            real(wp),intent(in) :: t
            real(wp),dimension(n),intent(in) :: x
            real(wp),dimension(n),intent(in) :: xd
            real(wp),dimension(n) :: xdd
        end function deriv
    end interface

    public :: nystrom_lear_44

contains

    subroutine nystrom_lear_44(n,df,t0,x0,xd0,dt,xf,xdf)

        ! 4th order Nystrom integration routine
        ! Take a step from t0 to t0+dt (x0,xd0 -> xf,xdf)

        implicit none

        integer,intent(in) :: n                      !number of equations
        procedure(deriv) :: df                       !derivative routine
        real(wp),intent(in) :: t0                    !initial time
        real(wp),dimension(n),intent(in) :: x0,xd0   !initial state
        real(wp),intent(in) :: dt                    !time step
        real(wp),dimension(n),intent(out) :: xf,xdf  !final state

        real(wp) :: t
        real(wp),dimension(n) :: x,xd,k1,k2,k3,k4

        real(wp),parameter :: s5 = sqrt(5.0_wp)
        real(wp),parameter :: delta2 = (5.0_wp-s5)/10.0_wp
        real(wp),parameter :: delta3 = (5.0_wp+s5)/10.0_wp
        real(wp),parameter :: delta4 = 1.0_wp
        real(wp),parameter :: a1 = (3.0_wp-s5)/20.0_wp
        real(wp),parameter :: b1 = 0.0_wp
        real(wp),parameter :: b2 = (3.0_wp+s5)/20.0_wp
        real(wp),parameter :: c1 = (-1.0_wp+s5)/4.0_wp
        real(wp),parameter :: c2 = 0.0_wp
        real(wp),parameter :: c3 = (3.0_wp-s5)/4.0_wp
        real(wp),parameter :: alpha1 = 1.0_wp/12.0_wp
        real(wp),parameter :: alpha2 = (5.0_wp+s5)/24.0_wp
        real(wp),parameter :: alpha3 = (5.0_wp-s5)/24.0_wp
        real(wp),parameter :: alpha4 = 0.0_wp
        real(wp),parameter :: ad1 = (5.0_wp-s5)/10.0_wp
        real(wp),parameter :: bd1 = -(5.0_wp+3.0_wp*s5)/20.0_wp
        real(wp),parameter :: bd2 = (3.0_wp+s5)/4.0_wp
        real(wp),parameter :: cd1 = (-1.0_wp+5.0_wp*s5)/4.0_wp
        real(wp),parameter :: cd2 = -(5.0_wp+3.0_wp*s5)/4.0_wp
        real(wp),parameter :: cd3 = (5.0_wp-s5)/2.0_wp
        real(wp),parameter :: beta1 = 1.0_wp/12.0_wp
        real(wp),parameter :: beta2 = 5.0_wp/12.0_wp
        real(wp),parameter :: beta3 = 5.0_wp/12.0_wp
        real(wp),parameter :: beta4 = 1.0_wp/12.0_wp

        if (dt==0.0_wp) then

            xf = x0
            xdf = xd0

        else

            ! first stage
            t = t0
            x = x0
            xd = xd0
            k1 = dt*df(n,t,x,xd)

            ! second stage
            t = t0+delta2*dt
            x = x0+delta2*dt*xd0+a1*dt*k1
            xd = xd0+ad1*k1
            k2 = dt*df(n,t,x,xd)

            ! third stage
            t = t0+delta3*dt
            x = x0+delta3*dt*xd0+dt*(b1*k1+b2*k2)
            xd = xd0+bd1*k1+bd2*k2
            k3 = dt*df(n,t,x,xd)

            ! fourth stage
            t = t0+dt
            x = x0+dt*xd0+dt*(c1*k1+c3*k3)
            xd = xd0+cd1*k1+cd2*k2+cd3*k3
            k4 = dt*df(n,t,x,xd)

            ! final state
            xf = x0+dt*xd0+dt*(alpha1*k1+alpha2*k2+alpha3*k3)
            xdf = xd0+beta1*k1+beta2*k2+beta3*k3+beta4*k4

        end if

    end subroutine nystrom_lear_44

end module nystrom
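A usage sketch, integrating the simple harmonic oscillator \(\ddot{x} = -x\) (my own test case, not from the references) from \(t=0\) to \(t=1\) in ten steps; the exact solution is \(x = \cos t\):

program nystrom_test
    use nystrom
    use, intrinsic :: iso_fortran_env, wp => real64
    implicit none
    integer,parameter :: n = 1
    real(wp) :: t, dt
    real(wp),dimension(n) :: x, xd, xf, xdf
    integer :: i
    t  = 0.0_wp
    dt = 0.1_wp
    x  = [1.0_wp]  ! x(0)  = 1
    xd = [0.0_wp]  ! x'(0) = 0
    do i = 1, 10   ! ten steps to t = 1
        call nystrom_lear_44(n, sho, t, x, xd, dt, xf, xdf)
        t  = t + dt
        x  = xf
        xd = xdf
    end do
    write(*,*) 'x(1)  = ', x(1),  ' (exact: ', cos(1.0_wp), ')'
    write(*,*) 'xd(1) = ', xd(1), ' (exact: ', -sin(1.0_wp), ')'
contains
    function sho(n,t,x,xd) result(xdd)
        ! simple harmonic oscillator: xdd = -x
        integer,intent(in) :: n
        real(wp),intent(in) :: t
        real(wp),dimension(n),intent(in) :: x
        real(wp),dimension(n),intent(in) :: xd
        real(wp),dimension(n) :: xdd
        xdd = -x
    end function sho
end program nystrom_test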