Numerical Differentiation

I present the initial release of a new modern Fortran library for computing Jacobian matrices using numerical differentiation. It is called NumDiff and is available on GitHub. The Jacobian is the matrix of partial derivatives of a set of \(m\) functions with respect to \(n\) variables:

\mathbf{J}(\mathbf{x}) = \frac{d \mathbf{f}}{d \mathbf{x}} = \left[
\begin{array}{ c c c }
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \\
\end{array}
\right]

Typically, each variable \(x_i\) is perturbed by a value \(h_i\) using forward, backward, or central differences to compute the Jacobian one column at a time (\(\partial \mathbf{f} / \partial x_i\)). Higher-order methods are also possible [1]. The following finite difference formulas are currently available in the library:

  • Two points:
    • \( (f(x+h)-f(x)) / h \)
    • \( (f(x)-f(x-h)) / h \)
  • Three points:
    • \( (f(x+h)-f(x-h)) / (2h) \)
    • \( (-3f(x)+4f(x+h)-f(x+2h)) / (2h) \)
    • \( (f(x-2h)-4f(x-h)+3f(x)) / (2h) \)
  • Four points:
    • \( (-2f(x-h)-3f(x)+6f(x+h)-f(x+2h)) / (6h) \)
    • \( (f(x-2h)-6f(x-h)+3f(x)+2f(x+h)) / (6h) \)
    • \( (-11f(x)+18f(x+h)-9f(x+2h)+2f(x+3h)) / (6h) \)
    • \( (-2f(x-3h)+9f(x-2h)-18f(x-h)+11f(x)) / (6h) \)
  • Five points:
    • \( (f(x-2h)-8f(x-h)+8f(x+h)-f(x+2h)) / (12h) \)
    • \( (-3f(x-h)-10f(x)+18f(x+h)-6f(x+2h)+f(x+3h)) / (12h) \)
    • \( (-f(x-3h)+6f(x-2h)-18f(x-h)+10f(x)+3f(x+h)) / (12h) \)
    • \( (-25f(x)+48f(x+h)-36f(x+2h)+16f(x+3h)-3f(x+4h)) / (12h) \)
    • \( (3f(x-4h)-16f(x-3h)+36f(x-2h)-48f(x-h)+25f(x)) / (12h) \)
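As a simple illustration of how one of these formulas is used in practice, here is a standalone sketch (not using NumDiff itself) that approximates a derivative with the three-point central difference and compares it to the exact value:

```fortran
program central_diff_demo

use iso_fortran_env, only: wp => real64

implicit none

real(wp) :: x, h, dfdx

x = 1.0_wp
h = 1.0e-6_wp

! three-point central difference: (f(x+h)-f(x-h)) / (2h)
dfdx = (f(x+h) - f(x-h)) / (2.0_wp*h)

write(*,*) 'approximate:', dfdx
write(*,*) 'exact      :', cos(x)
write(*,*) 'error      :', abs(dfdx-cos(x))

contains

    pure function f(x) result(y)
    !! the test function (its derivative is cos(x))
    real(wp),intent(in) :: x
    real(wp) :: y
    y = sin(x)
    end function f

end program central_diff_demo
```

For a Jacobian, the same formula is applied one perturbed variable at a time, with each evaluation returning the full vector \(\mathbf{f}\), yielding one column \(\partial \mathbf{f} / \partial x_i\).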

The basic features of NumDiff are listed here:

  • A variety of finite difference methods are available (and it is easy to add new ones).
  • If you want, you can specify a different finite difference method to use to compute each column of the Jacobian.
  • You can also specify the number of points of the methods, and a suitable one of that order will be selected on-the-fly so as not to violate the variable bounds.
  • I also included an alternate method using Neville’s process which computes each element of the Jacobian individually [2]. It takes a very large number of function evaluations but produces the most accurate answer.
  • A hash-table-based caching system is implemented to cache function evaluations and avoid unnecessary function calls.
  • It supports very large sparse systems by compactly storing and computing the Jacobian matrix using the sparsity pattern. Optimization codes such as SNOPT and Ipopt can use this form.
  • It can also return the dense (\(m \times n\)) matrix representation of the Jacobian if that sort of thing is your bag (for example, the older SLSQP requires this form).
  • It can also return the \(J*v\) product, where \(J\) is the full (\(m \times n\)) Jacobian matrix and \(v\) is an (\(n \times 1\)) input vector. This is useful for Krylov-type algorithms.
  • The sparsity pattern can be supplied by the user or computed by the library.
  • The sparsity pattern can also be partitioned so as to compute multiple columns of the Jacobian simultaneously, reducing the number of function calls [3].
  • It is written in object-oriented Fortran 2008. All user interaction is through a NumDiff class.
  • It is open source with a BSD-3 license.

I haven’t yet really tried to fine-tune the code, so I make no claim that it is optimal. I’m using various modern Fortran vector and matrix intrinsic routines such as PACK and COUNT, so there is likely room for efficiency improvements. I’d also like to add some parallelization, using either OpenMP or Coarray Fortran. OpenMP seems to have some issues with some modern Fortran constructs, so that might be tricky. I keep meaning to do something real with coarrays, so this could be my chance.

So, there you go internet. If anybody else finds it useful, let me know.


  1. G. Engeln-Müllges, F. Uhlig, Numerical Algorithms with Fortran, Springer-Verlag Berlin Heidelberg, 1996.
  2. J. Oliver, “An algorithm for numerical differentiation of a function of one real variable”, Journal of Computational and Applied Mathematics 6 (2) (1980) 145–160. [A Fortran 77 implementation of this algorithm by David Kahaner was formerly available from NIST, but the link seems to be dead. My modern Fortran version is available here.]
  3. T. F. Coleman, B. S. Garbow, J. J. Moré, “Algorithm 618: FORTRAN subroutines for estimating sparse Jacobian matrices”, ACM Transactions on Mathematical Software (TOMS), Volume 10, Issue 3, Sept. 1984.
Posted in Algorithms

GOTO Still Considered Harmful

For a number of years I have been familiar with the observation that the quality of programmers is a decreasing function of the density of go to statements in the programs they produce. — Edsger W. Dijkstra


One of the classics of computer science is Edsger Dijkstra’s “Go To Statement Considered Harmful”, written in 1968. This missive argued that the GOTO statement (present in several languages at the time, including Fortran) was too primitive for high-level programming languages, and should be avoided.

Most people now agree with this, although some even today think that GOTOs are fine under some circumstances. They are present in C and C++, and are apparently used extensively in the Linux kernel. A recent study of C code in GitHub concluded that GOTO use is not that bad and is mainly used for error-handling and cleanup tasks, for situations where there are no better alternatives in C. However, C being a low-level language, we should not expect much from it. For modern Fortran users however, there are better ways to do these things. For example, unlike in C, breaking out of multiple loops is possible without GOTOs by using named DO loops with an EXIT statement like so:

a_loop : do a=1,n
    b_loop: do b=1,m
        !do some stuff ...
        if (done) exit a_loop ! break out of the outer loop
    end do b_loop
end do a_loop

In old-school Fortran (or C) this would be something like this:

    do a=1,n
        do b=1,m
            ! do some stuff ...
            if (done) goto 10 ! break out of the outer loop
            ! ...
        end do
    end do
10  continue

Of course, these two simple examples are both functionally equivalent, but the first one uses a much more structured approach. It’s also a lot easier to follow what is going on. Once a statement label is declared, there’s nothing to stop a GOTO statement from anywhere in the code from jumping there (see spaghetti code). In my view, it’s best to avoid this possibility entirely. In modern Fortran, DO loops (with CYCLE and EXIT), SELECT CASE statements, and other language constructs have obviated the need for GOTO for quite some time. Fortran 2008 added the BLOCK construct, which was probably the final nail in the GOTO coffin, since it allows for the most common use cases (exception handling and cleanup) to be easily done without GOTOs. For example, in this code snippet, the main algorithm is contained within a BLOCK, and the exception handling code is outside:

main: block
    ! do some stuff ...
    if (error) exit main ! if there is a problem anywhere in this block,
                          ! then exit to the exception handling code.
    ! ...
    return   ! if everything is OK, then return
end block main

! exception handling code here

The cleanup case is similar (which is code that is always called):

main: block
    ! do some stuff ...
    if (need_to_cleanup) exit main ! for cleanup
    ! ...
end block main

! cleanup code here

I don’t believe any of the new programming languages that have cropped up in the past couple of decades has included a GOTO statement (although someone did create a goto statement for Python as an April Fool’s joke in 2004). Of course, the presence of GOTOs doesn’t mean the programmer is bad or that the code isn’t going to work well. There is a ton of legacy Fortran 77 code out there that is rock solid, but unfortunately littered with GOTOs. An example is the DIVA integrator from the JPL MATH77 library (the screenshot above is from this code). First written in 1987, it is code of the highest quality, and has been used for decades in many spacecraft applications. However, it is also spaghetti code of the highest order, and seems like it would be unimaginably hard to maintain or modify at this point.

Source: XKCD


Posted in Programming

Fortran at 60

Today marks the 60th anniversary of the release of the original Fortran Programmer’s Reference Manual. Fortran, the world’s first high-level computer programming language, was developed beginning in 1953 at IBM by a team led by John Backus. The first compiler was released in 1957. According to the manual, one of the main features was:

Object programs produced by FORTRAN will be nearly as efficient as those written by good programmers.

The first Fortran compiler (which was also the first optimizing compiler) was named one of the top 10 algorithms of the 20th century:

The creation of Fortran may rank as the single most important event in the history of computer programming: Finally, scientists (and others) could tell the computer what they wanted it to do, without having to descend into the netherworld of machine code. Although modest by modern compiler standards—Fortran I consisted of a mere 23,500 assembly-language instructions—the early compiler was nonetheless capable of surprisingly sophisticated computations. As Backus himself recalls in a recent history of Fortran I, II, and III, published in 1998 in the IEEE Annals of the History of Computing, the compiler “produced code of such efficiency that its output would startle the programmers who studied it.”

The entire manual was only 51 pages long. Fortran has evolved significantly since the 1950s, and the descendant of this fairly simple language continues to be used today. The most recent version of the language (a 603 page ISO standard) was published in 2010.

Posted in Programming

Computing Pi

The digits of \(\pi\) can be computed using the “Spigot algorithm” [1-2]. The interesting thing about this algorithm is that it doesn’t use any floating point computations, only integers.

A Fortran version of the algorithm is given below (a translation of the Pascal program in the reference). It computes the first 10,000 digits of \(\pi\). The nines and predigit business is because the algorithm will occasionally output 10 as the next digit, and the 1 will need to be added to the previous digit. If we know in advance that this won’t occur for a given range of digits, then the code can be greatly simplified. It can also be reformulated to compute multiple digits at a time if we change the base (see [1] for the simplified version in base 10,000).

subroutine compute_pi()

implicit none

integer,parameter :: n = 10000  ! number of digits to compute
integer,parameter :: len = 10*n/3 + 1

integer :: i,j,k,q,x,nines,predigit
integer,dimension(len) :: a

a = 2
nines = 0
predigit = 0

do j=1,n
    q = 0
    do i=len,1,-1
        x = 10*a(i) + q*i
        a(i) = mod(x,2*i-1)
        q = x/(2*i-1)
    end do
    a(1) = mod(q,10)
    q = q/10
    if (q==9) then
        nines = nines+1
    elseif (q==10) then
        write(*,'(I1)',advance='NO') predigit+1
        do k=1,nines
            write(*,'(I1)',advance='NO') 0
        end do
        predigit = 0
        nines = 0
    else
        if (j==2) then
            write(*,'(I1,".")',advance='NO') predigit
        elseif (j/=1) then
            write(*,'(I1)',advance='NO') predigit
        end if
        predigit = q
        if (nines/=0) then
            do k=1,nines
                write(*,'(I1)',advance='NO') 9
            end do
            nines = 0
        end if
    end if
end do
write(*,'(I1)',advance='NO') predigit

end subroutine compute_pi

The algorithm as originally published contained a mistake (len = 10*n/3); the correction is due to [3]. The code above prints the first 10,000 digits of \(\pi\).


See also

  1. S. Rabinowitz, Abstract 863–11–482: A Spigot Algorithm for Pi, American Mathematical Society, 12(1991)30, 1991.
  2. S. Rabinowitz, S. Wagon, “A Spigot Algorithm for the Digits of $\pi$“, American Mathematical Monthly, Vol. 102, Issue 3, March 1995, p. 195-203.
  3. J. Arndt, C. Haenel, π Unleashed, Springer, 2000.
Posted in Algorithms

Backward Compatibility

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” — Antoine de Saint-Exupéry

The Fortran standards committee generally refuses to break backward compatibility when Fortran is updated. This is a good thing (take that, Python), and code written decades ago can still be compiled fine today. However, over the years, various old features of the language have been identified as “obsolescent”, namely:

  • Alternate return
  • Assumed-length character functions
  • CHARACTER*(*) form of CHARACTER declaration
  • Computed GO TO statement
  • DATA statements among executable statements
  • Fixed source form
  • Statement functions
  • ENTRY statement
  • Labeled DO loops [will be obsolescent in Fortran 2015]
  • EQUIVALENCE [will be obsolescent in Fortran 2015]
  • COMMON Blocks [will be obsolescent in Fortran 2015]
  • BLOCK DATA [will be obsolescent in Fortran 2015]

And a small set of features has actually been deleted from the language standard:

  • ASSIGN and assigned GO TO statements
  • Assigned FORMAT specifier
  • Branching to an END IF statement from outside its IF block
  • H edit descriptor
  • PAUSE statement
  • Real and double precision DO control variables and DO loop control expressions
  • Arithmetic IF [will be deleted in Fortran 2015]
  • Shared DO termination and termination on a statement other than END DO or CONTINUE [will be deleted in Fortran 2015]

In practice, all compilers still support all the old features (although special compiler flags may be necessary to use them). Normally, you shouldn’t use any of this junk in new code. But there is still a lot of legacy FORTRAN 77 code out there that people want (or need) to compile. However, as I’ve shown many times in this blog, updating old Fortran code to modern standards is not really that big of a deal.


Fortran example from the 1956 Fortran programmer’s reference manual. It contains two obsolescent (fixed form source and a labeled DO loop) and one deleted Fortran feature (Arithmetic IF). This entire example could be replaced with biga = maxval(a) in modern Fortran.
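For concreteness, here is what that modernization looks like. The old fixed-form version is paraphrased in the comment (not the manual’s exact listing), followed by the one-line modern replacement:

```fortran
program biggest

implicit none

real :: a(10), biga

call random_number(a)

! the 1956-style version used fixed source form, a labeled
! DO loop, and an arithmetic IF, roughly like:
!
!       BIGA = A(1)
!       DO 20 I = 2,10
!       IF (BIGA-A(I)) 10,20,20
!    10 BIGA = A(I)
!    20 CONTINUE
!
! in modern Fortran, the whole thing is one intrinsic call:
biga = maxval(a)

write(*,*) biga

end program biggest
```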

When the next revision of the language (Fortran 2015) is published, it will mark the first time since Fortran was first standardized in 1966 that we will have two consecutive minor revisions of the language (2008 was also a minor revision). The last major revision of the language was Fortran 2003 over a decade ago. There still is no feature-complete free Fortran 2003 compiler (although gfortran currently does include almost all of the Fortran 2003 standard).

Personally, I would tend to prefer a faster-paced cycle of Fortran language development. I’m not one of those who think the language should include regular expressions or a 2D graphics API (seriously, C++?). But, there are clearly potentially useful things that are missing. I think the standard should finally acknowledge the existence of the file system, and provide intrinsic routines for changing and creating directories, searching for files, etc. Currently, if you want to do anything like that you have to resort to system calls or non-standard extensions provided by your compiler vendor (thus making the code less portable). A much more significant upgrade would be better support for generic programming (maybe we’ll get that in Fortran 2025). There are also many other feature proposals out there (see references below).
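As an example of the current workaround, a directory can only be created by shelling out to the operating system with execute_command_line (standard since Fortran 2008, but the command itself is platform-dependent; the mkdir -p here assumes a Unix-like shell):

```fortran
program make_directory

implicit none

integer :: exitstat, cmdstat

! no standard intrinsic exists for this, so call the OS:
call execute_command_line('mkdir -p results', &
                          exitstat=exitstat, cmdstat=cmdstat)

if (exitstat/=0 .or. cmdstat/=0) then
    write(*,*) 'could not create directory'
end if

end program make_directory
```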

Posted in Programming

Syntax Highlighting

Decently syntax-highlighted Fortran code on the internet is hard to come by. None of the major sites that people are likely to visit to learn about Fortran have it:

  • The Google Groups hosting of comp.lang.fortran (I don’t really expect much from this one since it’s just Usenet.)
  • Stack Overflow (we should expect better from them, since they have had syntax highlighting for many other languages for years.) It looks like they are using Google’s code-prettify (which seems to have a pull request ready to provide syntax highlighting for Fortran, so perhaps there is hope?)
  • Intel Fortran compiler documentation [example] (people pay good money for this compiler, and so should ask for better documentation).
  • GFortran documentation (their entire Fortran website looks like it is from the late 1990s, and could certainly use an overhaul).

Luckily GitHub has syntax highlighting for Fortran, as well as the Fortran Wiki.

Personally, I hate looking at non-syntax highlighted code. It’s not aesthetically pleasing and I find it hard to read. On this blog, I’m using a Fortran plugin for SyntaxHighlighter Evolved, which I downloaded somewhere at some point and have modified to account for various newer Fortran language features. It’s not perfect, but it looks pretty good.

Consider this example from the gfortran website:

Now that looks just awful, and not just because they are using ancient syntax such as the (/ ... /) array constructors and .eq.. Whereas the following syntax-highlighted one looks great:

program test_all

 implicit none

 logical :: l

 l = all([.true., .true., .true.])
 write(*,*) l
 call section()

 contains

  subroutine section()

   integer,dimension(2,3) :: a, b

   a = 1
   b = 1
   b(2,2) = 2
   write(*,*) all(a == b, 1)
   write(*,*) all(a == b, 2)

  end subroutine section

end program test_all

FORD-produced documentation has nice syntax highlighting for Fortran code provided by Pygments (which is written in Python). An example can be found here. Rouge is another code highlighter (written in Ruby) that supports Fortran and can output as HTML. Both Pygments and Rouge are open source and released under permissive licenses.

Posted in Programming

Intel Fortran Compiler 17.0

Intel has announced the availability of version 17.0 of the Intel Fortran Compiler (part of Intel Parallel Studio XE 2017). Slowly but surely, the compiler is approaching full support for the current Fortran 2008 standard. New Fortran 2008 features added in this release are:

  • TYPE(intrinsic-type)
  • Pointer initialization
  • Implied-shape PARAMETER arrays
  • Extend EXIT statement to all valid construct names
  • Support BIND(C) in internal procedures

In addition, the compiler now also supports the standard auto-reallocation on assignment by default (previously, you had to use a special compiler flag to enable this behavior).
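To illustrate what that default now enables (a minimal sketch):

```fortran
program auto_realloc

implicit none

integer,dimension(:),allocatable :: a

a = [1,2]   ! a is automatically allocated with size 2
a = [a,3]   ! a is automatically reallocated with size 3

write(*,*) size(a)  ! prints 3

end program auto_realloc
```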

Posted in Programming

JSON-Fortran 5.1


JSON-Fortran 5.1 is out. There are several new features in this release. I added a get_path() routine that can be used to return the path of a variable in a JSON structure. This can be used along with the traverse() routine to do something pseudointeresting: convert a JSON file into a Fortran namelist file. Why would anyone want to do that, you ask? Who knows. Consider the following example:

program why

 use json_module

 implicit none

 type(json_core) :: json
 type(json_value),pointer :: p
 integer :: iunit !! file unit

 ! open the output file for the namelist
 ! (the file name here is arbitrary for this example):
 open(newunit=iunit,file='data.nml',status='replace')

 write(iunit,'(A)') '&DATA'
 call json%initialize()
 call json%parse(file='data.json', p=p)
 call json%traverse(p,print_json_variable)
 write(iunit,'(A)') '/'
 close(iunit)

 contains

    subroutine print_json_variable(json,p,finished)

    !! A `traverse` routine for printing out all
    !! the variables in a JSON structure.

    implicit none

    class(json_core),intent(inout) :: json
    type(json_value),pointer,intent(in) :: p
    logical(json_LK),intent(out) :: finished

    character(kind=json_CK,len=:),allocatable :: path
    character(kind=json_CK,len=:),allocatable :: value
    logical(json_LK) :: found
    type(json_value),pointer :: child
    integer(json_IK) :: var_type

    call json%get_child(p,child)
    finished = .false.

    !only print the leaves:
    if (.not. associated(child)) then
        call json%get_path(p,path,found)
        if (found) then
            call json%info(p,var_type=var_type)
            select case (var_type)
            case (json_array, json_object)
                !an empty array or object
                !don't print anything
            case (json_string)
                ! note: strings are returned escaped
                ! without quotes
                call json%get(p,value)
                value = '"'//value//'"'
            case default
                ! get the value as a string
                ! [assumes strict_type_checking=false]
                call json%get(p,value)
            end select
            !check for errors:
            if (json%failed()) then
                finished = .true.
            else
                write(iunit,'(A)') &
                    path//json_CK_' = '//value//','
            end if
        end if
    end if

    end subroutine print_json_variable

end program why

Here, we are simply traversing the entire JSON structure, and printing out the paths of the leaf nodes using a namelist-style syntax. For the example JSON file:

{
  "t": 0.0,
  "x": [1.0, 2.0, 3.0],
  "m": 2000.0,
  "name": "foo"
}

This program will produce the following namelist file:

t = 0.0E+0,
x(1) = 0.1E+1,
x(2) = 0.2E+1,
x(3) = 0.3E+1,
m = 0.2E+4,
name = "foo",

Which could be read using the following Fortran program:

program namelist_test

 use iso_fortran_env, only: wp => real64

 implicit none

 real(wp) :: t,m,x(3)
 integer :: iunit,istat
 character(len=10) :: name

 ! define the namelist:
 namelist /DATA/ t,x,m,name

 ! read the namelist
 ! (file name assumed from the example above):
 open(newunit=iunit,file='data.nml',status='old')
 read(unit=iunit,nml=DATA,iostat=istat)
 close(iunit)
end program namelist_test

There is also a new minification option for printing a JSON structure with no extra whitespace.


See also

  • f90nml — A Python module for parsing Fortran namelist files
Posted in Programming

Dynamically Sizing Arrays

Often the need arises to add (or subtract) elements from an array on the fly. Fortran 2003 allows for this to be easily done using standard allocatable arrays (automatic reallocation on assignment). An example for integer arrays is shown here:

integer,dimension(:),allocatable :: x

x = [1,2,3]
x = [x,[4,5,6]] ! x is now [1,2,3,4,5,6]
x = x(1:4)      ! x is now [1,2,3,4]

Note that, if using the Intel compiler, this behavior is not enabled by default for computational efficiency reasons. To enable it you have to use the -assume realloc_lhs compiler flag.

Resizing an array like this carries a performance penalty. When adding a new element, the compiler will likely have to make a temporary copy of the array, deallocate the original and resize it, and then copy over the original elements and the new one. A simple test case is shown here (compiled with gfortran 6.1.0 with -O3 optimization enabled):

program test

implicit none

integer,dimension(:),allocatable :: x
integer :: i

x = [0]
do i=1,100000
    x = [x,i]
end do

end program test

This requires 2.828986 seconds on my laptop (or 35,348 assignments per second). Now, that may be good enough for some applications. However, performance can be improved significantly by allocating the array in chunks, as shown in the following example, where we allocate in chunks of 100 elements, and then resize it to the correct size at the end:

program test

implicit none

integer,dimension(:),allocatable :: x
integer :: i,n

integer,parameter :: chunk_size = 100

n = 0
do i=0,100000
    call add_to(x,i,n,chunk_size,finished=i==100000)
end do


 contains

 pure subroutine add_to(vec,val,n,chunk_size,finished)
 implicit none
 integer,dimension(:),allocatable,intent(inout) :: vec
    !! the vector to add to
 integer,intent(in) :: val  
    !! the value to add
 integer,intent(inout) :: n  
    !! counter for last element added to vec.
    !! must be initialized to size(vec)
    !! (or 0 if not allocated) before first call
 integer,intent(in) :: chunk_size  
    !! allocate vec in blocks of this size (>0)
 logical,intent(in) :: finished 
    !! set to true to return vec
    !! as its correct size (n)
 integer,dimension(:),allocatable :: tmp
 if (allocated(vec)) then
     if (n==size(vec)) then
         ! have to add another chunk:
         allocate(tmp(size(vec)+chunk_size))
         tmp(1:size(vec)) = vec
         call move_alloc(tmp,vec)
     end if
     n = n + 1
 else
     ! the first element:
     allocate(vec(chunk_size))
     n = 1
 end if
 vec(n) = val
 if (finished) then
     ! set vec to actual size (n):
     if (allocated(tmp)) deallocate(tmp)
     tmp = vec(1:n)
     call move_alloc(tmp,vec)
 end if
 end subroutine add_to

end program test

This requires only 0.022938 seconds (or 4,359,577 assignments per second) which is nearly 123 times faster. Note that we are using the Fortran 2003 move_alloc intrinsic function, which saves us an extra copy operation when the array is resized.
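The move_alloc semantics can be seen in isolation in this minimal sketch: the new storage is allocated, the old contents are copied once, and the new array is then transferred (not copied) into place:

```fortran
program move_alloc_demo

implicit none

integer,dimension(:),allocatable :: a, tmp

a = [1,2,3]

allocate(tmp(6))        ! the new, larger storage
tmp(1:3) = a            ! one copy of the old contents
call move_alloc(tmp,a)  ! a now has size 6; tmp is deallocated

write(*,*) size(a), allocated(tmp)  ! prints 6 and F

end program move_alloc_demo
```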

Increasing the chunk size can improve performance even more.


Depending on the specific application, a linked list is another option for dynamically-sized objects.

Posted in Programming

Natural Sorting

Sorting is one of the fundamental problems in computer science, so of course Fortran does not include any intrinsic sorting routine (we’ve got Bessel functions, though!). String sorting is a special case of this problem which includes various choices to consider, for example:

  • Natural or ASCII sorting
  • Case sensitive (e.g., ‘A’ < ‘a’) or case insensitive (e.g., ‘A’ == ‘a’)

“Natural” sorting (also called “alphanumeric” sorting) means to take into account numeric values in the string, rather than just comparing the ASCII value of each of the characters. This can produce an order that looks more natural to a human for strings that contain numbers. For example, in a “natural” sort, the string “case2.txt” will come before “case100.txt”, since the number 2 comes before the number 100. Natural sorting is the method used to sort file names in the MacOS X Finder (see image at right), while, interestingly, an ls -l from a Terminal merely does a basic ASCII sort.

For string sorting routines written in modern Fortran, check out my GitHub project stringsort. This library contains routines for both natural and ASCII string sorting. Natural sorting is achieved by breaking up each string into chunks. A chunk consists of a non-numeric character or a contiguous block of integer characters. A case insensitive search is done by simply converting each character to lowercase before comparing them. I make no claim that the routines are particularly optimized. One limitation is that contiguous integer characters are stored as an integer(INT32) value, which has a maximum value of 2147483647. Although note that it is easy to change the code to use integer(INT64) variables to increase this range up to 9223372036854775807 if necessary. Eliminating integer size restrictions entirely is left as an exercise for the reader.
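The chunking idea can be sketched in a standalone comparison function (this is just an illustration of the approach, not the actual stringsort implementation):

```fortran
program natural_compare_demo

implicit none

! "case2.txt" comes first naturally, but not in ASCII order:
write(*,*) natural_lt('case2.txt','case100.txt')  ! prints T
write(*,*) ('case2.txt' < 'case100.txt')          ! prints F ('2' > '1')

contains

    logical function natural_lt(a,b)
    !! true if a comes before b in "natural" order:
    !! digit chunks are compared as integers,
    !! everything else by character value
    character(len=*),intent(in) :: a,b
    integer :: ia,ib,va,vb
    ia = 1
    ib = 1
    do
        if (ia>len(a) .or. ib>len(b)) then
            natural_lt = len(a)<len(b)  ! shorter string first
            return
        end if
        if (is_digit(a(ia:ia)) .and. is_digit(b(ib:ib))) then
            call get_chunk(a,ia,va)  ! consume the digit chunks
            call get_chunk(b,ib,vb)
            if (va/=vb) then
                natural_lt = va<vb
                return
            end if
        else
            if (a(ia:ia)/=b(ib:ib)) then
                natural_lt = a(ia:ia)<b(ib:ib)
                return
            end if
            ia = ia+1
            ib = ib+1
        end if
    end do
    end function natural_lt

    pure logical function is_digit(c)
    character(len=1),intent(in) :: c
    is_digit = c>='0' .and. c<='9'
    end function is_digit

    pure subroutine get_chunk(s,i,val)
    !! read the integer chunk starting at s(i:i),
    !! advancing i past it
    character(len=*),intent(in) :: s
    integer,intent(inout) :: i
    integer,intent(out) :: val
    val = 0
    do
        if (i>len(s)) exit
        if (.not. is_digit(s(i:i))) exit
        val = 10*val + (ichar(s(i:i)) - ichar('0'))
        i = i+1
    end do
    end subroutine get_chunk

end program natural_compare_demo
```

Here ‘case2.txt’ compares less than ‘case100.txt’ because the digit chunks 2 and 100 are compared as integers, while a plain ASCII comparison orders them the other way around.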

Consider the following test case:

character(len=8),dimension(6) :: &
    str = [ 'z1.txt  ', &
            'z102.txt', &
            'Z101.txt', &
            'z100.txt', &
            'z10.txt ', &
            'Z11.txt '  ]

This list can be sorted (at least) four different ways: case sensitive or case insensitive, each with either ASCII or natural ordering.
Each of these can be done using stringsort with the following subroutine calls:

 ! case insensitive:
 call lexical_sort_recursive(str,case_sensitive=.false.)         ! ASCII
 call lexical_sort_natural_recursive(str,case_sensitive=.false.) ! natural

 ! case sensitive:
 call lexical_sort_recursive(str,case_sensitive=.true.)          ! ASCII
 call lexical_sort_natural_recursive(str,case_sensitive=.true.)  ! natural

Original Quicksort algorithm by Tony Hoare, 1961 (Communications of the ACM)

The routines use the quicksort algorithm, which was originally created for sorting strings (specifically words in Russian sentences so they could be looked up in a Russian-English dictionary). The algorithm is easily implemented in modern Fortran using recursion (non-recursive versions were also available before recursion was added to the language in Fortran 90). Quicksort was named one of the top 10 algorithms of the 20th century by the ACM (Fortran was also on the list).
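A bare-bones recursive quicksort in modern Fortran (a sketch using Hoare-style partitioning on array sections, not the stringsort routines themselves) looks like this:

```fortran
module qsort_module

implicit none

contains

    recursive subroutine quicksort(a)
    !! sort an integer array in ascending order
    !! using Hoare partitioning
    integer,dimension(:),intent(inout) :: a
    integer :: pivot,i,j,t
    if (size(a)<=1) return
    pivot = a(size(a)/2)  ! middle element as the pivot value
    i = 1
    j = size(a)
    do
        do while (a(i)<pivot)
            i = i+1
        end do
        do while (a(j)>pivot)
            j = j-1
        end do
        if (i>=j) exit
        t = a(i); a(i) = a(j); a(j) = t  ! swap
        i = i+1
        j = j-1
    end do
    ! recurse on the two partitions:
    call quicksort(a(:j))
    call quicksort(a(j+1:))
    end subroutine quicksort

end module qsort_module

program test_quicksort

use qsort_module

implicit none

integer :: a(8) = [5,3,8,1,9,2,7,4]

call quicksort(a)
write(*,*) a  ! sorted ascending

end program test_quicksort
```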

Posted in Programming