This is gsl-ref.info, produced by makeinfo version 4.13 from gsl-ref.texi. INFO-DIR-SECTION Software libraries START-INFO-DIR-ENTRY * gsl-ref: (gsl-ref). GNU Scientific Library - Reference END-INFO-DIR-ENTRY Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 The GSL Team. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with the Invariant Sections being "GNU General Public License" and "Free Software Needs Free Documentation", the Front-Cover text being "A GNU Manual", and with the Back-Cover Text being (a) (see below). A copy of the license is included in the section entitled "GNU Free Documentation License". (a) The Back-Cover Text is: "You have the freedom to copy and modify this GNU Manual."  File: gsl-ref.info, Node: Combination Examples, Next: Combination References and Further Reading, Prev: Reading and writing combinations, Up: Combinations 10.7 Examples ============= The example program below prints all subsets of the set {0,1,2,3} ordered by size. Subsets of the same size are ordered lexicographically.

#include <stdio.h>
#include <gsl/gsl_combination.h>

int
main (void)
{
  gsl_combination * c;
  size_t i;

  printf ("All subsets of {0,1,2,3} by size:\n") ;
  for (i = 0; i <= 4; i++)
    {
      c = gsl_combination_calloc (4, i);
      do
        {
          printf ("{");
          gsl_combination_fprintf (stdout, c, " %u");
          printf (" }\n");
        }
      while (gsl_combination_next (c) == GSL_SUCCESS);
      gsl_combination_free (c);
    }

  return 0;
}

Here is the output from the program,

$ ./a.out
All subsets of {0,1,2,3} by size:
{ }
{ 0 }
{ 1 }
{ 2 }
{ 3 }
{ 0 1 }
{ 0 2 }
{ 0 3 }
{ 1 2 }
{ 1 3 }
{ 2 3 }
{ 0 1 2 }
{ 0 1 3 }
{ 0 2 3 }
{ 1 2 3 }
{ 0 1 2 3 }

All 16 subsets are generated, and the subsets of each size are sorted lexicographically.  File: gsl-ref.info, Node: Combination References and Further Reading, Prev: Combination Examples, Up: Combinations 10.8 References and Further Reading =================================== Further information on combinations can be found in, Donald L. Kreher, Douglas R. Stinson, `Combinatorial Algorithms: Generation, Enumeration and Search', 1998, CRC Press LLC, ISBN 084933988X  File: gsl-ref.info, Node: Multisets, Next: Sorting, Prev: Combinations, Up: Top 11 Multisets ************ This chapter describes functions for creating and manipulating multisets. A multiset c is represented by an array of k integers in the range 0 to n-1, where each value c_i may occur more than once. The multiset c corresponds to indices of k elements chosen from an n element vector with replacement. In mathematical terms, n is the number of possible distinct values and k is the cardinality of the multiset (the total number of elements, counted with multiplicity). Multisets are useful, for example, when iterating over the indices of a k-th order symmetric tensor in n-space. The functions described in this chapter are defined in the header file `gsl_multiset.h'. * Menu: * The Multiset struct:: * Multiset allocation:: * Accessing multiset elements:: * Multiset properties:: * Multiset functions:: * Reading and writing multisets:: * Multiset Examples::  File: gsl-ref.info, Node: The Multiset struct, Next: Multiset allocation, Up: Multisets 11.1 The Multiset struct ======================== A multiset is defined by a structure containing three components, the values of n and k, and a pointer to the multiset array. The elements of the multiset array are all of type `size_t', and are stored in nondecreasing order.
The `gsl_multiset' structure looks like this, typedef struct { size_t n; size_t k; size_t *data; } gsl_multiset;  File: gsl-ref.info, Node: Multiset allocation, Next: Accessing multiset elements, Prev: The Multiset struct, Up: Multisets 11.2 Multiset allocation ======================== -- Function: gsl_multiset * gsl_multiset_alloc (size_t N, size_t K) This function allocates memory for a new multiset with parameters N, K. The multiset is not initialized and its elements are undefined. Use the function `gsl_multiset_calloc' if you want to create a multiset which is initialized to the lexicographically first multiset element. A null pointer is returned if insufficient memory is available to create the multiset. -- Function: gsl_multiset * gsl_multiset_calloc (size_t N, size_t K) This function allocates memory for a new multiset with parameters N, K and initializes it to the lexicographically first multiset element. A null pointer is returned if insufficient memory is available to create the multiset. -- Function: void gsl_multiset_init_first (gsl_multiset * C) This function initializes the multiset C to the lexicographically first multiset element, i.e. 0 repeated k times. -- Function: void gsl_multiset_init_last (gsl_multiset * C) This function initializes the multiset C to the lexicographically last multiset element, i.e. n-1 repeated k times. -- Function: void gsl_multiset_free (gsl_multiset * C) This function frees all the memory used by the multiset C. -- Function: int gsl_multiset_memcpy (gsl_multiset * DEST, const gsl_multiset * SRC) This function copies the elements of the multiset SRC into the multiset DEST. The two multisets must have the same size.  File: gsl-ref.info, Node: Accessing multiset elements, Next: Multiset properties, Prev: Multiset allocation, Up: Multisets 11.3 Accessing multiset elements ================================ The following function can be used to access the elements of a multiset. -- Function: size_t gsl_multiset_get (const gsl_multiset * C, const size_t I) This function returns the value of the I-th element of the multiset C. If I lies outside the allowed range of 0 to K-1 then the error handler is invoked and 0 is returned. An inline version of this function is used when `HAVE_INLINE' is defined.  File: gsl-ref.info, Node: Multiset properties, Next: Multiset functions, Prev: Accessing multiset elements, Up: Multisets 11.4 Multiset properties ======================== -- Function: size_t gsl_multiset_n (const gsl_multiset * C) This function returns the range (n) of the multiset C. -- Function: size_t gsl_multiset_k (const gsl_multiset * C) This function returns the number of elements (k) in the multiset C. -- Function: size_t * gsl_multiset_data (const gsl_multiset * C) This function returns a pointer to the array of elements in the multiset C. -- Function: int gsl_multiset_valid (gsl_multiset * C) This function checks that the multiset C is valid. The K elements should lie in the range 0 to N-1, with each value occurring in nondecreasing order.  File: gsl-ref.info, Node: Multiset functions, Next: Reading and writing multisets, Prev: Multiset properties, Up: Multisets 11.5 Multiset functions ======================= -- Function: int gsl_multiset_next (gsl_multiset * C) This function advances the multiset C to the next multiset element in lexicographic order and returns `GSL_SUCCESS'. If no further multisets elements are available it returns `GSL_FAILURE' and leaves C unmodified. 
Starting with the first multiset and repeatedly applying this function will iterate through all possible multisets of a given order. -- Function: int gsl_multiset_prev (gsl_multiset * C) This function steps backwards from the multiset C to the previous multiset element in lexicographic order, returning `GSL_SUCCESS'. If no previous multiset is available it returns `GSL_FAILURE' and leaves C unmodified.  File: gsl-ref.info, Node: Reading and writing multisets, Next: Multiset Examples, Prev: Multiset functions, Up: Multisets 11.6 Reading and writing multisets ================================== The library provides functions for reading and writing multisets to a file as binary data or formatted text. -- Function: int gsl_multiset_fwrite (FILE * STREAM, const gsl_multiset * C) This function writes the elements of the multiset C to the stream STREAM in binary format. The function returns `GSL_EFAILED' if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. -- Function: int gsl_multiset_fread (FILE * STREAM, gsl_multiset * C) This function reads elements from the open stream STREAM into the multiset C in binary format. The multiset C must be preallocated with correct values of n and k since the function uses the size of C to determine how many bytes to read. The function returns `GSL_EFAILED' if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. -- Function: int gsl_multiset_fprintf (FILE * STREAM, const gsl_multiset * C, const char * FORMAT) This function writes the elements of the multiset C line-by-line to the stream STREAM using the format specifier FORMAT, which should be suitable for a type of SIZE_T. In ISO C99 the type modifier `z' represents `size_t', so `"%zu\n"' is a suitable format.(1) The function returns `GSL_EFAILED' if there was a problem writing to the file. -- Function: int gsl_multiset_fscanf (FILE * STREAM, gsl_multiset * C) This function reads formatted data from the stream STREAM into the multiset C. The multiset C must be preallocated with correct values of n and k since the function uses the size of C to determine how many numbers to read. The function returns `GSL_EFAILED' if there was a problem reading from the file. ---------- Footnotes ---------- (1) In versions of the GNU C library prior to the ISO C99 standard, the type modifier `Z' was used instead.  File: gsl-ref.info, Node: Multiset Examples, Prev: Reading and writing multisets, Up: Multisets 11.7 Examples ============= The example program below prints all multisets elements containing the values {0,1,2,3} ordered by size. Multiset elements of the same size are ordered lexicographically. 
#include <stdio.h>
#include <gsl/gsl_multiset.h>

int
main (void)
{
  gsl_multiset * c;
  size_t i;

  printf ("All multisets of {0,1,2,3} by size:\n") ;
  for (i = 0; i <= 4; i++)
    {
      c = gsl_multiset_calloc (4, i);
      do
        {
          printf ("{");
          gsl_multiset_fprintf (stdout, c, " %u");
          printf (" }\n");
        }
      while (gsl_multiset_next (c) == GSL_SUCCESS);
      gsl_multiset_free (c);
    }

  return 0;
}

Here is the output from the program,

$ ./a.out
All multisets of {0,1,2,3} by size:
{ }
{ 0 }
{ 1 }
{ 2 }
{ 3 }
{ 0 0 }
{ 0 1 }
{ 0 2 }
{ 0 3 }
{ 1 1 }
{ 1 2 }
{ 1 3 }
{ 2 2 }
{ 2 3 }
{ 3 3 }
{ 0 0 0 }
{ 0 0 1 }
{ 0 0 2 }
{ 0 0 3 }
{ 0 1 1 }
{ 0 1 2 }
{ 0 1 3 }
{ 0 2 2 }
{ 0 2 3 }
{ 0 3 3 }
{ 1 1 1 }
{ 1 1 2 }
{ 1 1 3 }
{ 1 2 2 }
{ 1 2 3 }
{ 1 3 3 }
{ 2 2 2 }
{ 2 2 3 }
{ 2 3 3 }
{ 3 3 3 }
{ 0 0 0 0 }
{ 0 0 0 1 }
{ 0 0 0 2 }
{ 0 0 0 3 }
{ 0 0 1 1 }
{ 0 0 1 2 }
{ 0 0 1 3 }
{ 0 0 2 2 }
{ 0 0 2 3 }
{ 0 0 3 3 }
{ 0 1 1 1 }
{ 0 1 1 2 }
{ 0 1 1 3 }
{ 0 1 2 2 }
{ 0 1 2 3 }
{ 0 1 3 3 }
{ 0 2 2 2 }
{ 0 2 2 3 }
{ 0 2 3 3 }
{ 0 3 3 3 }
{ 1 1 1 1 }
{ 1 1 1 2 }
{ 1 1 1 3 }
{ 1 1 2 2 }
{ 1 1 2 3 }
{ 1 1 3 3 }
{ 1 2 2 2 }
{ 1 2 2 3 }
{ 1 2 3 3 }
{ 1 3 3 3 }
{ 2 2 2 2 }
{ 2 2 2 3 }
{ 2 2 3 3 }
{ 2 3 3 3 }
{ 3 3 3 3 }

All 70 multisets are generated and sorted lexicographically.  File: gsl-ref.info, Node: Sorting, Next: BLAS Support, Prev: Multisets, Up: Top 12 Sorting ********** This chapter describes functions for sorting data, both directly and indirectly (using an index). All the functions use the "heapsort" algorithm. Heapsort is an O(N \log N) algorithm which operates in-place and does not require any additional storage. It also provides consistent performance, the running time for its worst-case (ordered data) being not significantly longer than the average and best cases. Note that the heapsort algorithm does not preserve the relative ordering of equal elements--it is an "unstable" sort. However the resulting order of equal elements will be consistent across different platforms when using these functions. * Menu: * Sorting objects:: * Sorting vectors:: * Selecting the k smallest or largest elements:: * Computing the rank:: * Sorting Examples:: * Sorting References and Further Reading::  File: gsl-ref.info, Node: Sorting objects, Next: Sorting vectors, Up: Sorting 12.1 Sorting objects ==================== The following function provides a simple alternative to the standard library function `qsort'. It is intended for systems lacking `qsort', not as a replacement for it. The function `qsort' should be used whenever possible, as it will be faster and can provide stable ordering of equal elements. Documentation for `qsort' is available in the `GNU C Library Reference Manual'. The functions described in this section are defined in the header file `gsl_heapsort.h'. -- Function: void gsl_heapsort (void * ARRAY, size_t COUNT, size_t SIZE, gsl_comparison_fn_t COMPARE) This function sorts the COUNT elements of the array ARRAY, each of size SIZE, into ascending order using the comparison function COMPARE. The type of the comparison function is defined by,

int (*gsl_comparison_fn_t) (const void * a, const void * b)

A comparison function should return a negative integer if the first argument is less than the second argument, `0' if the two arguments are equal and a positive integer if the first argument is greater than the second argument. For example, the following function can be used to sort doubles into ascending numerical order.
int
compare_doubles (const double * a, const double * b)
{
  if (*a > *b)
    return 1;
  else if (*a < *b)
    return -1;
  else
    return 0;
}

The appropriate function call to perform the sort is,

gsl_heapsort (array, count, sizeof(double), compare_doubles);

Note that unlike `qsort' the heapsort algorithm cannot be made into a stable sort by pointer arithmetic. The trick of comparing pointers for equal elements in the comparison function does not work for the heapsort algorithm. The heapsort algorithm performs an internal rearrangement of the data which destroys its initial ordering. -- Function: int gsl_heapsort_index (size_t * P, const void * ARRAY, size_t COUNT, size_t SIZE, gsl_comparison_fn_t COMPARE) This function indirectly sorts the COUNT elements of the array ARRAY, each of size SIZE, into ascending order using the comparison function COMPARE. The resulting permutation is stored in P, an array of length COUNT. The elements of P give the index of the array element which would have been stored in that position if the array had been sorted in place. The first element of P gives the index of the least element in ARRAY, and the last element of P gives the index of the greatest element in ARRAY. The array itself is not changed.  File: gsl-ref.info, Node: Sorting vectors, Next: Selecting the k smallest or largest elements, Prev: Sorting objects, Up: Sorting 12.2 Sorting vectors ==================== The following functions will sort the elements of an array or vector, either directly or indirectly. They are defined for all real and integer types using the normal suffix rules. For example, the `float' versions of the array functions are `gsl_sort_float' and `gsl_sort_float_index'. The corresponding vector functions are `gsl_sort_vector_float' and `gsl_sort_vector_float_index'. The prototypes are available in the header files `gsl_sort_float.h' and `gsl_sort_vector_float.h'. The complete set of prototypes can be included using the header files `gsl_sort.h' and `gsl_sort_vector.h'. There are no functions for sorting complex arrays or vectors, since the ordering of complex numbers is not uniquely defined. To sort a complex vector by magnitude compute a real vector containing the magnitudes of the complex elements, and sort this vector indirectly. The resulting index gives the appropriate ordering of the original complex vector. -- Function: void gsl_sort (double * DATA, size_t STRIDE, size_t N) This function sorts the N elements of the array DATA with stride STRIDE into ascending numerical order. -- Function: void gsl_sort_vector (gsl_vector * V) This function sorts the elements of the vector V into ascending numerical order. -- Function: void gsl_sort_index (size_t * P, const double * DATA, size_t STRIDE, size_t N) This function indirectly sorts the N elements of the array DATA with stride STRIDE into ascending order, storing the resulting permutation in P. The array P must be allocated with a sufficient length to store the N elements of the permutation. The elements of P give the index of the array element which would have been stored in that position if the array had been sorted in place. The array DATA is not changed. -- Function: int gsl_sort_vector_index (gsl_permutation * P, const gsl_vector * V) This function indirectly sorts the elements of the vector V into ascending order, storing the resulting permutation in P. The elements of P give the index of the vector element which would have been stored in that position if the vector had been sorted in place.
The first element of P gives the index of the least element in V, and the last element of P gives the index of the greatest element in V. The vector V is not changed.  File: gsl-ref.info, Node: Selecting the k smallest or largest elements, Next: Computing the rank, Prev: Sorting vectors, Up: Sorting 12.3 Selecting the k smallest or largest elements ================================================= The functions described in this section select the k smallest or largest elements of a data set of size N. The routines use an O(kN) direct insertion algorithm which is suited to subsets that are small compared with the total size of the dataset. For example, the routines are useful for selecting the 10 largest values from one million data points, but not for selecting the largest 100,000 values. If the subset is a significant part of the total dataset it may be faster to sort all the elements of the dataset directly with an O(N \log N) algorithm and obtain the smallest or largest values that way. -- Function: int gsl_sort_smallest (double * DEST, size_t K, const double * SRC, size_t STRIDE, size_t N) This function copies the K smallest elements of the array SRC, of size N and stride STRIDE, in ascending numerical order into the array DEST. The size K of the subset must be less than or equal to N. The data SRC is not modified by this operation. -- Function: int gsl_sort_largest (double * DEST, size_t K, const double * SRC, size_t STRIDE, size_t N) This function copies the K largest elements of the array SRC, of size N and stride STRIDE, in descending numerical order into the array DEST. K must be less than or equal to N. The data SRC is not modified by this operation. -- Function: int gsl_sort_vector_smallest (double * DEST, size_t K, const gsl_vector * V) -- Function: int gsl_sort_vector_largest (double * DEST, size_t K, const gsl_vector * V) These functions copy the K smallest or largest elements of the vector V into the array DEST. K must be less than or equal to the length of the vector V. The following functions find the indices of the k smallest or largest elements of a dataset, -- Function: int gsl_sort_smallest_index (size_t * P, size_t K, const double * SRC, size_t STRIDE, size_t N) This function stores the indices of the K smallest elements of the array SRC, of size N and stride STRIDE, in the array P. The indices are chosen so that the corresponding data is in ascending numerical order. K must be less than or equal to N. The data SRC is not modified by this operation. -- Function: int gsl_sort_largest_index (size_t * P, size_t K, const double * SRC, size_t STRIDE, size_t N) This function stores the indices of the K largest elements of the array SRC, of size N and stride STRIDE, in the array P. The indices are chosen so that the corresponding data is in descending numerical order. K must be less than or equal to N. The data SRC is not modified by this operation. -- Function: int gsl_sort_vector_smallest_index (size_t * P, size_t K, const gsl_vector * V) -- Function: int gsl_sort_vector_largest_index (size_t * P, size_t K, const gsl_vector * V) These functions store the indices of the K smallest or largest elements of the vector V in the array P. K must be less than or equal to the length of the vector V.  File: gsl-ref.info, Node: Computing the rank, Next: Sorting Examples, Prev: Selecting the k smallest or largest elements, Up: Sorting 12.4 Computing the rank ======================= The "rank" of an element is its order in the sorted data. 
The rank is the inverse of the index permutation, P. It can be computed using the following algorithm,

for (i = 0; i < p->size; i++)
  {
    size_t pi = p->data[i];
    rank->data[pi] = i;
  }

This can be computed directly using the function `gsl_permutation_inverse(rank,p)'. The following function will print the rank of each element of the vector V,

void
print_rank (gsl_vector * v)
{
  size_t i;
  size_t n = v->size;
  gsl_permutation * perm = gsl_permutation_alloc(n);
  gsl_permutation * rank = gsl_permutation_alloc(n);

  gsl_sort_vector_index (perm, v);
  gsl_permutation_inverse (rank, perm);

  for (i = 0; i < n; i++)
    {
      double vi = gsl_vector_get(v, i);
      printf ("element = %zu, value = %g, rank = %zu\n",
              i, vi, rank->data[i]);
    }

  gsl_permutation_free (perm);
  gsl_permutation_free (rank);
}

 File: gsl-ref.info, Node: Sorting Examples, Next: Sorting References and Further Reading, Prev: Computing the rank, Up: Sorting 12.5 Examples ============= The following example shows how to use the permutation P to print the elements of the vector V in ascending order,

gsl_sort_vector_index (p, v);

for (i = 0; i < v->size; i++)
  {
    double vpi = gsl_vector_get (v, p->data[i]);
    printf ("order = %zu, value = %g\n", i, vpi);
  }

The next example uses the function `gsl_sort_smallest' to select the 5 smallest numbers from 100000 uniform random variates stored in an array,

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_sort.h>

int
main (void)
{
  const gsl_rng_type * T;
  gsl_rng * r;

  size_t i, k = 5, N = 100000;

  double * x = malloc (N * sizeof(double));
  double * small = malloc (k * sizeof(double));

  gsl_rng_env_setup();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  for (i = 0; i < N; i++)
    {
      x[i] = gsl_rng_uniform(r);
    }

  gsl_sort_smallest (small, k, x, 1, N);

  printf ("%zu smallest values from %zu\n", k, N);

  for (i = 0; i < k; i++)
    {
      printf ("%zu: %.18f\n", i, small[i]);
    }

  free (x);
  free (small);
  gsl_rng_free (r);
  return 0;
}

The output lists the 5 smallest values, in ascending order,

$ ./a.out
5 smallest values from 100000
0: 0.000003489200025797
1: 0.000008199829608202
2: 0.000008953968062997
3: 0.000010712770745158
4: 0.000033531803637743

 File: gsl-ref.info, Node: Sorting References and Further Reading, Prev: Sorting Examples, Up: Sorting 12.6 References and Further Reading =================================== The subject of sorting is covered extensively in Knuth's `Sorting and Searching', Donald E. Knuth, `The Art of Computer Programming: Sorting and Searching' (Vol 3, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896850. The Heapsort algorithm is described in the following book, Robert Sedgewick, `Algorithms in C', Addison-Wesley, ISBN 0201514257.  File: gsl-ref.info, Node: BLAS Support, Next: Linear Algebra, Prev: Sorting, Up: Top 13 BLAS Support *************** The Basic Linear Algebra Subprograms (BLAS) define a set of fundamental operations on vectors and matrices which can be used to create optimized higher-level linear algebra functionality. The library provides a low-level layer which corresponds directly to the C-language BLAS standard, referred to here as "CBLAS", and a higher-level interface for operations on GSL vectors and matrices. Users who are interested in simple operations on GSL vector and matrix objects should use the high-level layer described in this chapter. The functions are declared in the file `gsl_blas.h' and should satisfy the needs of most users. Note that GSL matrices are implemented using dense-storage so the interface only includes the corresponding dense-storage BLAS functions.
The full BLAS functionality for band-format and packed-format matrices is available through the low-level CBLAS interface. Similarly, GSL vectors are restricted to positive strides, whereas the low-level CBLAS interface supports negative strides as specified in the BLAS standard.(1) The interface for the `gsl_cblas' layer is specified in the file `gsl_cblas.h'. This interface corresponds to the BLAS Technical Forum's standard for the C interface to legacy BLAS implementations. Users who have access to other conforming CBLAS implementations can use these in place of the version provided by the library. Note that users who have only a Fortran BLAS library can use a CBLAS conformant wrapper to convert it into a CBLAS library. A reference CBLAS wrapper for legacy Fortran implementations exists as part of the CBLAS standard and can be obtained from Netlib. The complete set of CBLAS functions is listed in an appendix (*note GSL CBLAS Library::). There are three levels of BLAS operations,

Level 1
     Vector operations, e.g. y = \alpha x + y

Level 2
     Matrix-vector operations, e.g. y = \alpha A x + \beta y

Level 3
     Matrix-matrix operations, e.g. C = \alpha A B + C

Each routine has a name which specifies the operation, the type of matrices involved and their precisions. Some of the most common operations and their names are given below,

DOT     scalar product, x^T y
AXPY    vector sum, \alpha x + y
MV      matrix-vector product, A x
SV      matrix-vector solve, inv(A) x
MM      matrix-matrix product, A B
SM      matrix-matrix solve, inv(A) B

The types of matrices are,

GE      general
GB      general band
SY      symmetric
SB      symmetric band
SP      symmetric packed
HE      hermitian
HB      hermitian band
HP      hermitian packed
TR      triangular
TB      triangular band
TP      triangular packed

Each operation is defined for four precisions,

S       single real
D       double real
C       single complex
Z       double complex

Thus, for example, the name SGEMM stands for "single-precision general matrix-matrix multiply" and ZGEMM stands for "double-precision complex matrix-matrix multiply". Note that the vector and matrix arguments to BLAS functions must not be aliased, as the results are undefined when the underlying arrays overlap (*note Aliasing of arrays::). * Menu: * GSL BLAS Interface:: * BLAS Examples:: * BLAS References and Further Reading:: ---------- Footnotes ---------- (1) In the low-level CBLAS interface, a negative stride accesses the vector elements in reverse order, i.e. the i-th element is given by (N-i)*|incx| for incx < 0.  File: gsl-ref.info, Node: GSL BLAS Interface, Next: BLAS Examples, Up: BLAS Support 13.1 GSL BLAS Interface ======================= GSL provides dense vector and matrix objects, based on the relevant built-in types. The library provides an interface to the BLAS operations which apply to these objects. The interface to this functionality is given in the file `gsl_blas.h'. * Menu: * Level 1 GSL BLAS Interface:: * Level 2 GSL BLAS Interface:: * Level 3 GSL BLAS Interface::  File: gsl-ref.info, Node: Level 1 GSL BLAS Interface, Next: Level 2 GSL BLAS Interface, Up: GSL BLAS Interface 13.1.1 Level 1 -------------- -- Function: int gsl_blas_sdsdot (float ALPHA, const gsl_vector_float * X, const gsl_vector_float * Y, float * RESULT) This function computes the sum \alpha + x^T y for the vectors X and Y, returning the result in RESULT.
-- Function: int gsl_blas_sdot (const gsl_vector_float * X, const gsl_vector_float * Y, float * RESULT) -- Function: int gsl_blas_dsdot (const gsl_vector_float * X, const gsl_vector_float * Y, double * RESULT) -- Function: int gsl_blas_ddot (const gsl_vector * X, const gsl_vector * Y, double * RESULT) These functions compute the scalar product x^T y for the vectors X and Y, returning the result in RESULT. -- Function: int gsl_blas_cdotu (const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_complex_float * DOTU) -- Function: int gsl_blas_zdotu (const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_complex * DOTU) These functions compute the complex scalar product x^T y for the vectors X and Y, returning the result in DOTU -- Function: int gsl_blas_cdotc (const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_complex_float * DOTC) -- Function: int gsl_blas_zdotc (const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_complex * DOTC) These functions compute the complex conjugate scalar product x^H y for the vectors X and Y, returning the result in DOTC -- Function: float gsl_blas_snrm2 (const gsl_vector_float * X) -- Function: double gsl_blas_dnrm2 (const gsl_vector * X) These functions compute the Euclidean norm ||x||_2 = \sqrt {\sum x_i^2} of the vector X. -- Function: float gsl_blas_scnrm2 (const gsl_vector_complex_float * X) -- Function: double gsl_blas_dznrm2 (const gsl_vector_complex * X) These functions compute the Euclidean norm of the complex vector X, ||x||_2 = \sqrt {\sum (\Re(x_i)^2 + \Im(x_i)^2)}. -- Function: float gsl_blas_sasum (const gsl_vector_float * X) -- Function: double gsl_blas_dasum (const gsl_vector * X) These functions compute the absolute sum \sum |x_i| of the elements of the vector X. -- Function: float gsl_blas_scasum (const gsl_vector_complex_float * X) -- Function: double gsl_blas_dzasum (const gsl_vector_complex * X) These functions compute the sum of the magnitudes of the real and imaginary parts of the complex vector X, \sum |\Re(x_i)| + |\Im(x_i)|. -- Function: CBLAS_INDEX_t gsl_blas_isamax (const gsl_vector_float * X) -- Function: CBLAS_INDEX_t gsl_blas_idamax (const gsl_vector * X) -- Function: CBLAS_INDEX_t gsl_blas_icamax (const gsl_vector_complex_float * X) -- Function: CBLAS_INDEX_t gsl_blas_izamax (const gsl_vector_complex * X) These functions return the index of the largest element of the vector X. The largest element is determined by its absolute magnitude for real vectors and by the sum of the magnitudes of the real and imaginary parts |\Re(x_i)| + |\Im(x_i)| for complex vectors. If the largest value occurs several times then the index of the first occurrence is returned. -- Function: int gsl_blas_sswap (gsl_vector_float * X, gsl_vector_float * Y) -- Function: int gsl_blas_dswap (gsl_vector * X, gsl_vector * Y) -- Function: int gsl_blas_cswap (gsl_vector_complex_float * X, gsl_vector_complex_float * Y) -- Function: int gsl_blas_zswap (gsl_vector_complex * X, gsl_vector_complex * Y) These functions exchange the elements of the vectors X and Y. -- Function: int gsl_blas_scopy (const gsl_vector_float * X, gsl_vector_float * Y) -- Function: int gsl_blas_dcopy (const gsl_vector * X, gsl_vector * Y) -- Function: int gsl_blas_ccopy (const gsl_vector_complex_float * X, gsl_vector_complex_float * Y) -- Function: int gsl_blas_zcopy (const gsl_vector_complex * X, gsl_vector_complex * Y) These functions copy the elements of the vector X into the vector Y. 
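As a brief illustration of the Level-1 routines described above, here is a minimal sketch (not one of the manual's own examples; the vector values are arbitrary). It wraps two ordinary C arrays with `gsl_vector_view_array' and then calls `gsl_blas_ddot' and `gsl_blas_dnrm2',

#include <stdio.h>
#include <gsl/gsl_blas.h>

int
main (void)
{
  double xdata[] = { 1.0, 2.0, 3.0 };  /* arbitrary example data */
  double ydata[] = { 4.0, 5.0, 6.0 };

  gsl_vector_view x = gsl_vector_view_array (xdata, 3);
  gsl_vector_view y = gsl_vector_view_array (ydata, 3);

  double dot;

  /* dot = x^T y */
  gsl_blas_ddot (&x.vector, &y.vector, &dot);

  /* print the scalar product and the Euclidean norm ||x||_2 */
  printf ("x^T y = %g, ||x||_2 = %g\n",
          dot, gsl_blas_dnrm2 (&x.vector));

  return 0;
}

For the data above this prints x^T y = 32, together with ||x||_2 = sqrt(14).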
-- Function: int gsl_blas_saxpy (float ALPHA, const gsl_vector_float * X, gsl_vector_float * Y) -- Function: int gsl_blas_daxpy (double ALPHA, const gsl_vector * X, gsl_vector * Y) -- Function: int gsl_blas_caxpy (const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, gsl_vector_complex_float * Y) -- Function: int gsl_blas_zaxpy (const gsl_complex ALPHA, const gsl_vector_complex * X, gsl_vector_complex * Y) These functions compute the sum y = \alpha x + y for the vectors X and Y. -- Function: void gsl_blas_sscal (float ALPHA, gsl_vector_float * X) -- Function: void gsl_blas_dscal (double ALPHA, gsl_vector * X) -- Function: void gsl_blas_cscal (const gsl_complex_float ALPHA, gsl_vector_complex_float * X) -- Function: void gsl_blas_zscal (const gsl_complex ALPHA, gsl_vector_complex * X) -- Function: void gsl_blas_csscal (float ALPHA, gsl_vector_complex_float * X) -- Function: void gsl_blas_zdscal (double ALPHA, gsl_vector_complex * X) These functions rescale the vector X by the multiplicative factor ALPHA. -- Function: int gsl_blas_srotg (float A[], float B[], float C[], float S[]) -- Function: int gsl_blas_drotg (double A[], double B[], double C[], double S[]) These functions compute a Givens rotation (c,s) which zeroes the vector (a,b),

[  c  s ] [ a ] = [ r ]
[ -s  c ] [ b ]   [ 0 ]

The variables A and B are overwritten by the routine. -- Function: int gsl_blas_srot (gsl_vector_float * X, gsl_vector_float * Y, float C, float S) -- Function: int gsl_blas_drot (gsl_vector * X, gsl_vector * Y, const double C, const double S) These functions apply a Givens rotation (x', y') = (c x + s y, -s x + c y) to the vectors X, Y. -- Function: int gsl_blas_srotmg (float D1[], float D2[], float B1[], float B2, float P[]) -- Function: int gsl_blas_drotmg (double D1[], double D2[], double B1[], double B2, double P[]) These functions compute a modified Givens transformation. The modified Givens transformation is defined in the original Level-1 BLAS specification, given in the references. -- Function: int gsl_blas_srotm (gsl_vector_float * X, gsl_vector_float * Y, const float P[]) -- Function: int gsl_blas_drotm (gsl_vector * X, gsl_vector * Y, const double P[]) These functions apply a modified Givens transformation.  File: gsl-ref.info, Node: Level 2 GSL BLAS Interface, Next: Level 3 GSL BLAS Interface, Prev: Level 1 GSL BLAS Interface, Up: GSL BLAS Interface 13.1.2 Level 2 -------------- -- Function: int gsl_blas_sgemv (CBLAS_TRANSPOSE_t TRANSA, float ALPHA, const gsl_matrix_float * A, const gsl_vector_float * X, float BETA, gsl_vector_float * Y) -- Function: int gsl_blas_dgemv (CBLAS_TRANSPOSE_t TRANSA, double ALPHA, const gsl_matrix * A, const gsl_vector * X, double BETA, gsl_vector * Y) -- Function: int gsl_blas_cgemv (CBLAS_TRANSPOSE_t TRANSA, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_vector_complex_float * X, const gsl_complex_float BETA, gsl_vector_complex_float * Y) -- Function: int gsl_blas_zgemv (CBLAS_TRANSPOSE_t TRANSA, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_vector_complex * X, const gsl_complex BETA, gsl_vector_complex * Y) These functions compute the matrix-vector product and sum y = \alpha op(A) x + \beta y, where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'.
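As a minimal sketch of the calling convention for these matrix-vector routines (the data here is invented for illustration, not taken from the manual), the following fragment computes y = \alpha A x + \beta y in double precision with `gsl_blas_dgemv',

#include <stdio.h>
#include <gsl/gsl_blas.h>

int
main (void)
{
  double adata[] = { 1.0, 2.0,   /* 2-by-2 matrix, row major, arbitrary values */
                     3.0, 4.0 };
  double xdata[] = { 1.0, 1.0 };
  double ydata[] = { 0.0, 0.0 };

  gsl_matrix_view A = gsl_matrix_view_array (adata, 2, 2);
  gsl_vector_view x = gsl_vector_view_array (xdata, 2);
  gsl_vector_view y = gsl_vector_view_array (ydata, 2);

  /* y = 1.0 * A x + 0.0 * y */
  gsl_blas_dgemv (CblasNoTrans, 1.0, &A.matrix, &x.vector,
                  0.0, &y.vector);

  printf ("y = [ %g, %g ]\n", ydata[0], ydata[1]);
  return 0;
}

Passing `CblasTrans' instead of `CblasNoTrans' would compute y = \alpha A^T x + \beta y with the same storage.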
-- Function: int gsl_blas_strmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_float * A, gsl_vector_float * X) -- Function: int gsl_blas_dtrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix * A, gsl_vector * X) -- Function: int gsl_blas_ctrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex_float * A, gsl_vector_complex_float * X) -- Function: int gsl_blas_ztrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex * A, gsl_vector_complex * X) These functions compute the matrix-vector product x = op(A) x for the triangular matrix A, where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'. When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used. If DIAG is `CblasNonUnit' then the diagonal of the matrix is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced. -- Function: int gsl_blas_strsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_float * A, gsl_vector_float * X) -- Function: int gsl_blas_dtrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix * A, gsl_vector * X) -- Function: int gsl_blas_ctrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex_float * A, gsl_vector_complex_float * X) -- Function: int gsl_blas_ztrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex * A, gsl_vector_complex * X) These functions compute inv(op(A)) x for X, where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'. When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used. If DIAG is `CblasNonUnit' then the diagonal of the matrix is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced. -- Function: int gsl_blas_ssymv (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_matrix_float * A, const gsl_vector_float * X, float BETA, gsl_vector_float * Y) -- Function: int gsl_blas_dsymv (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_matrix * A, const gsl_vector * X, double BETA, gsl_vector * Y) These functions compute the matrix-vector product and sum y = \alpha A x + \beta y for the symmetric matrix A. Since the matrix A is symmetric only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. -- Function: int gsl_blas_chemv (CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_vector_complex_float * X, const gsl_complex_float BETA, gsl_vector_complex_float * Y) -- Function: int gsl_blas_zhemv (CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_vector_complex * X, const gsl_complex BETA, gsl_vector_complex * Y) These functions compute the matrix-vector product and sum y = \alpha A x + \beta y for the hermitian matrix A. Since the matrix A is hermitian only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. 
The imaginary elements of the diagonal are automatically assumed to be zero and are not referenced. -- Function: int gsl_blas_sger (float ALPHA, const gsl_vector_float * X, const gsl_vector_float * Y, gsl_matrix_float * A) -- Function: int gsl_blas_dger (double ALPHA, const gsl_vector * X, const gsl_vector * Y, gsl_matrix * A) -- Function: int gsl_blas_cgeru (const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_matrix_complex_float * A) -- Function: int gsl_blas_zgeru (const gsl_complex ALPHA, const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_matrix_complex * A) These functions compute the rank-1 update A = \alpha x y^T + A of the matrix A. -- Function: int gsl_blas_cgerc (const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_matrix_complex_float * A) -- Function: int gsl_blas_zgerc (const gsl_complex ALPHA, const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_matrix_complex * A) These functions compute the conjugate rank-1 update A = \alpha x y^H + A of the matrix A. -- Function: int gsl_blas_ssyr (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_vector_float * X, gsl_matrix_float * A) -- Function: int gsl_blas_dsyr (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_vector * X, gsl_matrix * A) These functions compute the symmetric rank-1 update A = \alpha x x^T + A of the symmetric matrix A. Since the matrix A is symmetric only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. -- Function: int gsl_blas_cher (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_vector_complex_float * X, gsl_matrix_complex_float * A) -- Function: int gsl_blas_zher (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_vector_complex * X, gsl_matrix_complex * A) These functions compute the hermitian rank-1 update A = \alpha x x^H + A of the hermitian matrix A. Since the matrix A is hermitian only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. The imaginary elements of the diagonal are automatically set to zero. -- Function: int gsl_blas_ssyr2 (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_vector_float * X, const gsl_vector_float * Y, gsl_matrix_float * A) -- Function: int gsl_blas_dsyr2 (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_vector * X, const gsl_vector * Y, gsl_matrix * A) These functions compute the symmetric rank-2 update A = \alpha x y^T + \alpha y x^T + A of the symmetric matrix A. Since the matrix A is symmetric only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. -- Function: int gsl_blas_cher2 (CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_matrix_complex_float * A) -- Function: int gsl_blas_zher2 (CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_matrix_complex * A) These functions compute the hermitian rank-2 update A = \alpha x y^H + \alpha^* y x^H + A of the hermitian matrix A. Since the matrix A is hermitian only its upper half or lower half need to be stored. 
When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. The imaginary elements of the diagonal are automatically set to zero.  File: gsl-ref.info, Node: Level 3 GSL BLAS Interface, Prev: Level 2 GSL BLAS Interface, Up: GSL BLAS Interface 13.1.3 Level 3 -------------- -- Function: int gsl_blas_sgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, float ALPHA, const gsl_matrix_float * A, const gsl_matrix_float * B, float BETA, gsl_matrix_float * C) -- Function: int gsl_blas_dgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, double ALPHA, const gsl_matrix * A, const gsl_matrix * B, double BETA, gsl_matrix * C) -- Function: int gsl_blas_cgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C) -- Function: int gsl_blas_zgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C) These functions compute the matrix-matrix product and sum C = \alpha op(A) op(B) + \beta C where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans' and similarly for the parameter TRANSB. -- Function: int gsl_blas_ssymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, float ALPHA, const gsl_matrix_float * A, const gsl_matrix_float * B, float BETA, gsl_matrix_float * C) -- Function: int gsl_blas_dsymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, double ALPHA, const gsl_matrix * A, const gsl_matrix * B, double BETA, gsl_matrix * C) -- Function: int gsl_blas_csymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C) -- Function: int gsl_blas_zsymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C) These functions compute the matrix-matrix product and sum C = \alpha A B + \beta C for SIDE is `CblasLeft' and C = \alpha B A + \beta C for SIDE is `CblasRight', where the matrix A is symmetric. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. -- Function: int gsl_blas_chemm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C) -- Function: int gsl_blas_zhemm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C) These functions compute the matrix-matrix product and sum C = \alpha A B + \beta C for SIDE is `CblasLeft' and C = \alpha B A + \beta C for SIDE is `CblasRight', where the matrix A is hermitian. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. The imaginary elements of the diagonal are automatically set to zero. 
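The symmetric and hermitian products above follow the same calling pattern as the general matrix-matrix multiply. As a minimal sketch (with made-up data, not from the manual), the following fragment computes C = A B with `gsl_blas_dsymm', supplying only the upper triangle of the symmetric matrix A,

#include <stdio.h>
#include <gsl/gsl_blas.h>

int
main (void)
{
  /* Symmetric 2-by-2 matrix; with CblasUpper only the upper triangle
     is referenced, so the lower-left entry is never read. */
  double adata[] = { 2.0, 1.0,
                     0.0, 3.0 };
  double bdata[] = { 1.0, 0.0,   /* B = identity, so C should equal A */
                     0.0, 1.0 };
  double cdata[] = { 0.0, 0.0,
                     0.0, 0.0 };

  gsl_matrix_view A = gsl_matrix_view_array (adata, 2, 2);
  gsl_matrix_view B = gsl_matrix_view_array (bdata, 2, 2);
  gsl_matrix_view C = gsl_matrix_view_array (cdata, 2, 2);

  /* C = 1.0 * A B + 0.0 * C, with the symmetric matrix A on the left */
  gsl_blas_dsymm (CblasLeft, CblasUpper, 1.0,
                  &A.matrix, &B.matrix, 0.0, &C.matrix);

  printf ("[ %g %g\n  %g %g ]\n",
          cdata[0], cdata[1], cdata[2], cdata[3]);
  return 0;
}

The output is the full symmetric matrix { 2 1; 1 3 }, confirming that the unreferenced lower element did not affect the result.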
-- Function: int gsl_blas_strmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, float ALPHA, const gsl_matrix_float * A, gsl_matrix_float * B) -- Function: int gsl_blas_dtrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, double ALPHA, const gsl_matrix * A, gsl_matrix * B) -- Function: int gsl_blas_ctrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, gsl_matrix_complex_float * B) -- Function: int gsl_blas_ztrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex ALPHA, const gsl_matrix_complex * A, gsl_matrix_complex * B) These functions compute the matrix-matrix product B = \alpha op(A) B for SIDE is `CblasLeft' and B = \alpha B op(A) for SIDE is `CblasRight'. The matrix A is triangular and op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'. When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used. If DIAG is `CblasNonUnit' then the diagonal of A is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced. -- Function: int gsl_blas_strsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, float ALPHA, const gsl_matrix_float * A, gsl_matrix_float * B) -- Function: int gsl_blas_dtrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, double ALPHA, const gsl_matrix * A, gsl_matrix * B) -- Function: int gsl_blas_ctrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, gsl_matrix_complex_float * B) -- Function: int gsl_blas_ztrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex ALPHA, const gsl_matrix_complex * A, gsl_matrix_complex * B) These functions compute the inverse-matrix matrix product B = \alpha op(inv(A))B for SIDE is `CblasLeft' and B = \alpha B op(inv(A)) for SIDE is `CblasRight'. The matrix A is triangular and op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'. When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used. If DIAG is `CblasNonUnit' then the diagonal of A is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced. -- Function: int gsl_blas_ssyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, float ALPHA, const gsl_matrix_float * A, float BETA, gsl_matrix_float * C) -- Function: int gsl_blas_dsyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, double ALPHA, const gsl_matrix * A, double BETA, gsl_matrix * C) -- Function: int gsl_blas_csyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_complex_float BETA, gsl_matrix_complex_float * C) -- Function: int gsl_blas_zsyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_complex BETA, gsl_matrix_complex * C) These functions compute a rank-k update of the symmetric matrix C, C = \alpha A A^T + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^T A + \beta C when TRANS is `CblasTrans'. 
Since the matrix C is symmetric only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used. -- Function: int gsl_blas_cherk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, float ALPHA, const gsl_matrix_complex_float * A, float BETA, gsl_matrix_complex_float * C) -- Function: int gsl_blas_zherk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, double ALPHA, const gsl_matrix_complex * A, double BETA, gsl_matrix_complex * C) These functions compute a rank-k update of the hermitian matrix C, C = \alpha A A^H + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^H A + \beta C when TRANS is `CblasConjTrans'. Since the matrix C is hermitian only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used. The imaginary elements of the diagonal are automatically set to zero. -- Function: int gsl_blas_ssyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, float ALPHA, const gsl_matrix_float * A, const gsl_matrix_float * B, float BETA, gsl_matrix_float * C) -- Function: int gsl_blas_dsyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, double ALPHA, const gsl_matrix * A, const gsl_matrix * B, double BETA, gsl_matrix * C) -- Function: int gsl_blas_csyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C) -- Function: int gsl_blas_zsyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C) These functions compute a rank-2k update of the symmetric matrix C, C = \alpha A B^T + \alpha B A^T + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^T B + \alpha B^T A + \beta C when TRANS is `CblasTrans'. Since the matrix C is symmetric only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used. -- Function: int gsl_blas_cher2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, float BETA, gsl_matrix_complex_float * C) -- Function: int gsl_blas_zher2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, double BETA, gsl_matrix_complex * C) These functions compute a rank-2k update of the hermitian matrix C, C = \alpha A B^H + \alpha^* B A^H + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^H B + \alpha^* B^H A + \beta C when TRANS is `CblasConjTrans'. Since the matrix C is hermitian only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used. The imaginary elements of the diagonal are automatically set to zero.  
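To round off this section, here is a minimal sketch of the rank-k update described above (the data is arbitrary, chosen only for illustration). It forms C = A A^T with `gsl_blas_dsyrk', storing only the upper triangle of C,

#include <stdio.h>
#include <gsl/gsl_blas.h>

int
main (void)
{
  double adata[] = { 1.0, 2.0, 3.0,   /* 2-by-3 matrix, arbitrary values */
                     4.0, 5.0, 6.0 };
  double cdata[] = { 0.0, 0.0,        /* 2-by-2 symmetric result */
                     0.0, 0.0 };

  gsl_matrix_view A = gsl_matrix_view_array (adata, 2, 3);
  gsl_matrix_view C = gsl_matrix_view_array (cdata, 2, 2);

  /* C = 1.0 * A A^T + 0.0 * C; only the upper triangle of C is written */
  gsl_blas_dsyrk (CblasUpper, CblasNoTrans, 1.0, &A.matrix,
                  0.0, &C.matrix);

  printf ("[ %g %g\n  . %g ]\n", cdata[0], cdata[1], cdata[3]);
  return 0;
}

The lower-triangular element c[2] is left untouched, which is why it is printed here as a placeholder dot.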
File: gsl-ref.info, Node: BLAS Examples, Next: BLAS References and Further Reading, Prev: GSL BLAS Interface, Up: BLAS Support 13.2 Examples ============= The following program computes the product of two matrices using the Level-3 BLAS function DGEMM,

[ 0.11 0.12 0.13 ]   [ 1011 1012 ]     [ 367.76 368.12 ]
[ 0.21 0.22 0.23 ]   [ 1021 1022 ]  =  [ 674.06 674.72 ]
                     [ 1031 1032 ]

The matrices are stored in row major order, according to the C convention for arrays.

#include <stdio.h>
#include <gsl/gsl_blas.h>

int
main (void)
{
  double a[] = { 0.11, 0.12, 0.13,
                 0.21, 0.22, 0.23 };

  double b[] = { 1011, 1012,
                 1021, 1022,
                 1031, 1032 };

  double c[] = { 0.00, 0.00,
                 0.00, 0.00 };

  gsl_matrix_view A = gsl_matrix_view_array(a, 2, 3);
  gsl_matrix_view B = gsl_matrix_view_array(b, 3, 2);
  gsl_matrix_view C = gsl_matrix_view_array(c, 2, 2);

  /* Compute C = A B */

  gsl_blas_dgemm (CblasNoTrans, CblasNoTrans,
                  1.0, &A.matrix, &B.matrix,
                  0.0, &C.matrix);

  printf ("[ %g, %g\n", c[0], c[1]);
  printf ("  %g, %g ]\n", c[2], c[3]);

  return 0;
}

Here is the output from the program,

$ ./a.out
[ 367.76, 368.12
  674.06, 674.72 ]

 File: gsl-ref.info, Node: BLAS References and Further Reading, Prev: BLAS Examples, Up: BLAS Support 13.3 References and Further Reading =================================== Information on the BLAS standards, including both the legacy and updated interface standards, is available online from the BLAS Homepage and BLAS Technical Forum web-site. `BLAS Homepage' `http://www.netlib.org/blas/' `BLAS Technical Forum' `http://www.netlib.org/blas/blast-forum/' The following papers contain the specifications for Level 1, Level 2 and Level 3 BLAS. C. Lawson, R. Hanson, D. Kincaid, F. Krogh, "Basic Linear Algebra Subprograms for Fortran Usage", `ACM Transactions on Mathematical Software', Vol. 5 (1979), Pages 308-325. J.J. Dongarra, J. DuCroz, S. Hammarling, R. Hanson, "An Extended Set of Fortran Basic Linear Algebra Subprograms", `ACM Transactions on Mathematical Software', Vol. 14, No. 1 (1988), Pages 1-32. J.J. Dongarra, I. Duff, J. DuCroz, S. Hammarling, "A Set of Level 3 Basic Linear Algebra Subprograms", `ACM Transactions on Mathematical Software', Vol. 16 (1990), Pages 1-28. Postscript versions of the latter two papers are available from `http://www.netlib.org/blas/'. A CBLAS wrapper for Fortran BLAS libraries is available from the same location.  File: gsl-ref.info, Node: Linear Algebra, Next: Eigensystems, Prev: BLAS Support, Up: Top 14 Linear Algebra ***************** This chapter describes functions for solving linear systems. The library provides linear algebra operations which operate directly on the `gsl_vector' and `gsl_matrix' objects. These routines use the standard algorithms from Golub & Van Loan's `Matrix Computations' with Level-1 and Level-2 BLAS calls for efficiency. The functions described in this chapter are declared in the header file `gsl_linalg.h'.
* Menu: * LU Decomposition:: * QR Decomposition:: * QR Decomposition with Column Pivoting:: * Singular Value Decomposition:: * Cholesky Decomposition:: * Tridiagonal Decomposition of Real Symmetric Matrices:: * Tridiagonal Decomposition of Hermitian Matrices:: * Hessenberg Decomposition of Real Matrices:: * Hessenberg-Triangular Decomposition of Real Matrices:: * Bidiagonalization:: * Householder Transformations:: * Householder solver for linear systems:: * Tridiagonal Systems:: * Balancing:: * Linear Algebra Examples:: * Linear Algebra References and Further Reading::  File: gsl-ref.info, Node: LU Decomposition, Next: QR Decomposition, Up: Linear Algebra 14.1 LU Decomposition ===================== A general square matrix A has an LU decomposition into upper and lower triangular matrices, P A = L U where P is a permutation matrix, L is unit lower triangular matrix and U is upper triangular matrix. For square matrices this decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = P b, U x = y), which can be solved by forward and back-substitution. Note that the LU decomposition is valid for singular matrices. -- Function: int gsl_linalg_LU_decomp (gsl_matrix * A, gsl_permutation * P, int * SIGNUM) -- Function: int gsl_linalg_complex_LU_decomp (gsl_matrix_complex * A, gsl_permutation * P, int * SIGNUM) These functions factorize the square matrix A into the LU decomposition PA = LU. On output the diagonal and upper triangular part of the input matrix A contain the matrix U. The lower triangular part of the input matrix (excluding the diagonal) contains L. The diagonal elements of L are unity, and are not stored. The permutation matrix P is encoded in the permutation P. The j-th column of the matrix P is given by the k-th column of the identity matrix, where k = p_j the j-th element of the permutation vector. The sign of the permutation is given by SIGNUM. It has the value (-1)^n, where n is the number of interchanges in the permutation. The algorithm used in the decomposition is Gaussian Elimination with partial pivoting (Golub & Van Loan, `Matrix Computations', Algorithm 3.4.1). -- Function: int gsl_linalg_LU_solve (const gsl_matrix * LU, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) -- Function: int gsl_linalg_complex_LU_solve (const gsl_matrix_complex * LU, const gsl_permutation * P, const gsl_vector_complex * B, gsl_vector_complex * X) These functions solve the square system A x = b using the LU decomposition of A into (LU, P) given by `gsl_linalg_LU_decomp' or `gsl_linalg_complex_LU_decomp' as input. -- Function: int gsl_linalg_LU_svx (const gsl_matrix * LU, const gsl_permutation * P, gsl_vector * X) -- Function: int gsl_linalg_complex_LU_svx (const gsl_matrix_complex * LU, const gsl_permutation * P, gsl_vector_complex * X) These functions solve the square system A x = b in-place using the precomputed LU decomposition of A into (LU,P). On input X should contain the right-hand side b, which is replaced by the solution on output. 
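As a minimal sketch of the calling sequence for the functions above (the 2-by-2 system is invented for illustration; the library's own worked example appears in the Examples node of this chapter), the following program factorizes a matrix with `gsl_linalg_LU_decomp' and then solves A x = b with `gsl_linalg_LU_solve',

#include <stdio.h>
#include <gsl/gsl_linalg.h>

int
main (void)
{
  double adata[] = { 3.0, 1.0,   /* arbitrary nonsingular 2-by-2 matrix */
                     1.0, 2.0 };
  double bdata[] = { 9.0, 8.0 };

  gsl_matrix_view A = gsl_matrix_view_array (adata, 2, 2);
  gsl_vector_view b = gsl_vector_view_array (bdata, 2);

  gsl_vector * x = gsl_vector_alloc (2);
  gsl_permutation * p = gsl_permutation_alloc (2);
  int signum;

  /* A is overwritten by its combined LU factors */
  gsl_linalg_LU_decomp (&A.matrix, p, &signum);
  gsl_linalg_LU_solve (&A.matrix, p, &b.vector, x);

  printf ("x = [ %g, %g ]\n",
          gsl_vector_get (x, 0), gsl_vector_get (x, 1));

  gsl_permutation_free (p);
  gsl_vector_free (x);
  return 0;
}

For this system the program prints x = [ 2, 3 ].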
-- Function: int gsl_linalg_LU_refine (const gsl_matrix * A, const gsl_matrix * LU, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X, gsl_vector * RESIDUAL) -- Function: int gsl_linalg_complex_LU_refine (const gsl_matrix_complex * A, const gsl_matrix_complex * LU, const gsl_permutation * P, const gsl_vector_complex * B, gsl_vector_complex * X, gsl_vector_complex * RESIDUAL) These functions apply an iterative improvement to X, the solution of A x = b, from the precomputed LU decomposition of A into (LU,P). The initial residual r = A x - b is also computed and stored in RESIDUAL. -- Function: int gsl_linalg_LU_invert (const gsl_matrix * LU, const gsl_permutation * P, gsl_matrix * INVERSE) -- Function: int gsl_linalg_complex_LU_invert (const gsl_matrix_complex * LU, const gsl_permutation * P, gsl_matrix_complex * INVERSE) These functions compute the inverse of a matrix A from its LU decomposition (LU,P), storing the result in the matrix INVERSE. The inverse is computed by solving the system A x = b for each column of the identity matrix. It is preferable to avoid direct use of the inverse whenever possible, as the linear solver functions can obtain the same result more efficiently and reliably (consult any introductory textbook on numerical linear algebra for details). -- Function: double gsl_linalg_LU_det (gsl_matrix * LU, int SIGNUM) -- Function: gsl_complex gsl_linalg_complex_LU_det (gsl_matrix_complex * LU, int SIGNUM) These functions compute the determinant of a matrix A from its LU decomposition, LU. The determinant is computed as the product of the diagonal elements of U and the sign of the row permutation SIGNUM. -- Function: double gsl_linalg_LU_lndet (gsl_matrix * LU) -- Function: double gsl_linalg_complex_LU_lndet (gsl_matrix_complex * LU) These functions compute the logarithm of the absolute value of the determinant of a matrix A, \ln|\det(A)|, from its LU decomposition, LU. This function may be useful if the direct computation of the determinant would overflow or underflow. -- Function: int gsl_linalg_LU_sgndet (gsl_matrix * LU, int SIGNUM) -- Function: gsl_complex gsl_linalg_complex_LU_sgndet (gsl_matrix_complex * LU, int SIGNUM) These functions compute the sign or phase factor of the determinant of a matrix A, \det(A)/|\det(A)|, from its LU decomposition, LU.  File: gsl-ref.info, Node: QR Decomposition, Next: QR Decomposition with Column Pivoting, Prev: LU Decomposition, Up: Linear Algebra 14.2 QR Decomposition ===================== A general rectangular M-by-N matrix A has a QR decomposition into the product of an orthogonal M-by-M square matrix Q (where Q^T Q = I) and an M-by-N right-triangular matrix R, A = Q R This decomposition can be used to convert the linear system A x = b into the triangular system R x = Q^T b, which can be solved by back-substitution. Another use of the QR decomposition is to compute an orthonormal basis for a set of vectors. The first N columns of Q form an orthonormal basis for the range of A, ran(A), when A has full column rank. -- Function: int gsl_linalg_QR_decomp (gsl_matrix * A, gsl_vector * TAU) This function factorizes the M-by-N matrix A into the QR decomposition A = Q R. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The vector TAU and the columns of the lower triangular part of the matrix A contain the Householder coefficients and Householder vectors which encode the orthogonal matrix Q. The vector TAU must be of length k=\min(M,N). 
The matrix Q is related to these components by, Q = Q_k ... Q_2 Q_1 where Q_i = I - \tau_i v_i v_i^T and v_i is the Householder vector v_i = (0,...,1,A(i+1,i),A(i+2,i),...,A(m,i)). This is the same storage scheme as used by LAPACK.

The algorithm used to perform the decomposition is Householder QR (Golub & Van Loan, `Matrix Computations', Algorithm 5.2.1).

 -- Function: int gsl_linalg_QR_solve (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_vector * B, gsl_vector * X)

This function solves the square system A x = b using the QR decomposition of A held in (QR, TAU) which must have been computed previously with `gsl_linalg_QR_decomp'. The least-squares solution for rectangular systems can be found using `gsl_linalg_QR_lssolve'.

 -- Function: int gsl_linalg_QR_svx (const gsl_matrix * QR, const gsl_vector * TAU, gsl_vector * X)

This function solves the square system A x = b in-place using the QR decomposition of A held in (QR,TAU) which must have been computed previously by `gsl_linalg_QR_decomp'. On input X should contain the right-hand side b, which is replaced by the solution on output.

 -- Function: int gsl_linalg_QR_lssolve (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_vector * B, gsl_vector * X, gsl_vector * RESIDUAL)

This function finds the least squares solution to the overdetermined system A x = b where the matrix A has more rows than columns. The least squares solution minimizes the Euclidean norm of the residual, ||Ax - b||. The routine requires as input the QR decomposition of A into (QR, TAU) given by `gsl_linalg_QR_decomp'. The solution is returned in X. The residual is computed as a by-product and stored in RESIDUAL.

 -- Function: int gsl_linalg_QR_QTvec (const gsl_matrix * QR, const gsl_vector * TAU, gsl_vector * V)

This function applies the matrix Q^T encoded in the decomposition (QR,TAU) to the vector V, storing the result Q^T v in V. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T.

 -- Function: int gsl_linalg_QR_Qvec (const gsl_matrix * QR, const gsl_vector * TAU, gsl_vector * V)

This function applies the matrix Q encoded in the decomposition (QR,TAU) to the vector V, storing the result Q v in V. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q.

 -- Function: int gsl_linalg_QR_QTmat (const gsl_matrix * QR, const gsl_vector * TAU, gsl_matrix * A)

This function applies the matrix Q^T encoded in the decomposition (QR,TAU) to the matrix A, storing the result Q^T A in A. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T.

 -- Function: int gsl_linalg_QR_Rsolve (const gsl_matrix * QR, const gsl_vector * B, gsl_vector * X)

This function solves the triangular system R x = b for X. It may be useful if the product b' = Q^T b has already been computed using `gsl_linalg_QR_QTvec'.

 -- Function: int gsl_linalg_QR_Rsvx (const gsl_matrix * QR, gsl_vector * X)

This function solves the triangular system R x = b for X in-place. On input X should contain the right-hand side b and is replaced by the solution on output. This function may be useful if the product b' = Q^T b has already been computed using `gsl_linalg_QR_QTvec'.
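As a minimal sketch of the least-squares usage, the following program factorizes a 3-by-2 matrix with `gsl_linalg_QR_decomp' and then calls `gsl_linalg_QR_lssolve'; the data values are arbitrary examples:

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* Overdetermined system: 3 equations, 2 unknowns
          (arbitrary example values) */
       double a_data[] = { 1.0, 1.0,
                           1.0, 2.0,
                           1.0, 3.0 };
       double b_data[] = { 1.0, 2.0, 2.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 3, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 3);

       gsl_vector *tau = gsl_vector_alloc (2);      /* length min(M,N) */
       gsl_vector *x = gsl_vector_alloc (2);
       gsl_vector *residual = gsl_vector_alloc (3); /* same length as b */

       gsl_linalg_QR_decomp (&A.matrix, tau);
       gsl_linalg_QR_lssolve (&A.matrix, tau, &b.vector, x, residual);

       printf ("least-squares solution:\n");
       gsl_vector_fprintf (stdout, x, "%g");

       gsl_vector_free (tau);
       gsl_vector_free (x);
       gsl_vector_free (residual);
       return 0;
     }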
-- Function: int gsl_linalg_QR_unpack (const gsl_matrix * QR, const gsl_vector * TAU, gsl_matrix * Q, gsl_matrix * R) This function unpacks the encoded QR decomposition (QR,TAU) into the matrices Q and R, where Q is M-by-M and R is M-by-N. -- Function: int gsl_linalg_QR_QRsolve (gsl_matrix * Q, gsl_matrix * R, const gsl_vector * B, gsl_vector * X) This function solves the system R x = Q^T b for X. It can be used when the QR decomposition of a matrix is available in unpacked form as (Q, R). -- Function: int gsl_linalg_QR_update (gsl_matrix * Q, gsl_matrix * R, gsl_vector * W, const gsl_vector * V) This function performs a rank-1 update w v^T of the QR decomposition (Q, R). The update is given by Q'R' = Q (R + w v^T) where the output matrices Q' and R' are also orthogonal and right triangular. Note that W is destroyed by the update. -- Function: int gsl_linalg_R_solve (const gsl_matrix * R, const gsl_vector * B, gsl_vector * X) This function solves the triangular system R x = b for the N-by-N matrix R. -- Function: int gsl_linalg_R_svx (const gsl_matrix * R, gsl_vector * X) This function solves the triangular system R x = b in-place. On input X should contain the right-hand side b, which is replaced by the solution on output.  File: gsl-ref.info, Node: QR Decomposition with Column Pivoting, Next: Singular Value Decomposition, Prev: QR Decomposition, Up: Linear Algebra 14.3 QR Decomposition with Column Pivoting ========================================== The QR decomposition can be extended to the rank deficient case by introducing a column permutation P, A P = Q R The first r columns of Q form an orthonormal basis for the range of A for a matrix with column rank r. This decomposition can also be used to convert the linear system A x = b into the triangular system R y = Q^T b, x = P y, which can be solved by back-substitution and permutation. We denote the QR decomposition with column pivoting by QRP^T since A = Q R P^T. -- Function: int gsl_linalg_QRPT_decomp (gsl_matrix * A, gsl_vector * TAU, gsl_permutation * P, int * SIGNUM, gsl_vector * NORM) This function factorizes the M-by-N matrix A into the QRP^T decomposition A = Q R P^T. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The permutation matrix P is stored in the permutation P. The sign of the permutation is given by SIGNUM. It has the value (-1)^n, where n is the number of interchanges in the permutation. The vector TAU and the columns of the lower triangular part of the matrix A contain the Householder coefficients and vectors which encode the orthogonal matrix Q. The vector TAU must be of length k=\min(M,N). The matrix Q is related to these components by, Q = Q_k ... Q_2 Q_1 where Q_i = I - \tau_i v_i v_i^T and v_i is the Householder vector v_i = (0,...,1,A(i+1,i),A(i+2,i),...,A(m,i)). This is the same storage scheme as used by LAPACK. The vector NORM is a workspace of length N used for column pivoting. The algorithm used to perform the decomposition is Householder QR with column pivoting (Golub & Van Loan, `Matrix Computations', Algorithm 5.4.1). -- Function: int gsl_linalg_QRPT_decomp2 (const gsl_matrix * A, gsl_matrix * Q, gsl_matrix * R, gsl_vector * TAU, gsl_permutation * P, int * SIGNUM, gsl_vector * NORM) This function factorizes the matrix A into the decomposition A = Q R P^T without modifying A itself and storing the output in the separate matrices Q and R. 
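One possible way to call `gsl_linalg_QRPT_decomp2' is sketched below; it factorizes a small matrix without modifying it, leaving Q, R and the permutation available for the solvers described next. The matrix entries are arbitrary example values:

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* Arbitrary 3-by-3 example matrix */
       double a_data[] = { 1.0, 2.0, 3.0,
                           4.0, 5.0, 6.0,
                           7.0, 8.0, 10.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 3, 3);

       gsl_matrix *Q = gsl_matrix_alloc (3, 3);  /* M-by-M */
       gsl_matrix *R = gsl_matrix_alloc (3, 3);  /* M-by-N */
       gsl_vector *tau = gsl_vector_alloc (3);   /* length min(M,N) */
       gsl_vector *norm = gsl_vector_alloc (3);  /* workspace of length N */
       gsl_permutation *p = gsl_permutation_alloc (3);
       int signum;

       /* A itself is left unmodified by decomp2 */
       gsl_linalg_QRPT_decomp2 (&A.matrix, Q, R, tau, p, &signum, norm);

       printf ("diagonal of R: %g %g %g\n",
               gsl_matrix_get (R, 0, 0),
               gsl_matrix_get (R, 1, 1),
               gsl_matrix_get (R, 2, 2));

       gsl_matrix_free (Q);
       gsl_matrix_free (R);
       gsl_vector_free (tau);
       gsl_vector_free (norm);
       gsl_permutation_free (p);
       return 0;
     }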
-- Function: int gsl_linalg_QRPT_solve (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) This function solves the square system A x = b using the QRP^T decomposition of A held in (QR, TAU, P) which must have been computed previously by `gsl_linalg_QRPT_decomp'. -- Function: int gsl_linalg_QRPT_svx (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_permutation * P, gsl_vector * X) This function solves the square system A x = b in-place using the QRP^T decomposition of A held in (QR,TAU,P). On input X should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_QRPT_QRsolve (const gsl_matrix * Q, const gsl_matrix * R, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) This function solves the square system R P^T x = Q^T b for X. It can be used when the QR decomposition of a matrix is available in unpacked form as (Q, R). -- Function: int gsl_linalg_QRPT_update (gsl_matrix * Q, gsl_matrix * R, const gsl_permutation * P, gsl_vector * W, const gsl_vector * V) This function performs a rank-1 update w v^T of the QRP^T decomposition (Q, R, P). The update is given by Q'R' = Q (R + w v^T P) where the output matrices Q' and R' are also orthogonal and right triangular. Note that W is destroyed by the update. The permutation P is not changed. -- Function: int gsl_linalg_QRPT_Rsolve (const gsl_matrix * QR, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) This function solves the triangular system R P^T x = b for the N-by-N matrix R contained in QR. -- Function: int gsl_linalg_QRPT_Rsvx (const gsl_matrix * QR, const gsl_permutation * P, gsl_vector * X) This function solves the triangular system R P^T x = b in-place for the N-by-N matrix R contained in QR. On input X should contain the right-hand side b, which is replaced by the solution on output.  File: gsl-ref.info, Node: Singular Value Decomposition, Next: Cholesky Decomposition, Prev: QR Decomposition with Column Pivoting, Up: Linear Algebra 14.4 Singular Value Decomposition ================================= A general rectangular M-by-N matrix A has a singular value decomposition (SVD) into the product of an M-by-N orthogonal matrix U, an N-by-N diagonal matrix of singular values S and the transpose of an N-by-N orthogonal square matrix V, A = U S V^T The singular values \sigma_i = S_{ii} are all non-negative and are generally chosen to form a non-increasing sequence \sigma_1 >= \sigma_2 >= ... >= \sigma_N >= 0. The singular value decomposition of a matrix has many practical uses. The condition number of the matrix is given by the ratio of the largest singular value to the smallest singular value. The presence of a zero singular value indicates that the matrix is singular. The number of non-zero singular values indicates the rank of the matrix. In practice singular value decomposition of a rank-deficient matrix will not produce exact zeroes for singular values, due to finite numerical precision. Small singular values should be edited by choosing a suitable tolerance. For a rank-deficient matrix, the null space of A is given by the columns of V corresponding to the zero singular values. Similarly, the range of A is given by columns of U corresponding to the non-zero singular values. Note that the routines here compute the "thin" version of the SVD with U as M-by-N orthogonal matrix. This allows in-place computation and is the most commonly-used form in practice. 
Mathematically, the "full" SVD is defined with U as an M-by-M orthogonal matrix and S as an M-by-N diagonal matrix (with additional rows of zeros). -- Function: int gsl_linalg_SV_decomp (gsl_matrix * A, gsl_matrix * V, gsl_vector * S, gsl_vector * WORK) This function factorizes the M-by-N matrix A into the singular value decomposition A = U S V^T for M >= N. On output the matrix A is replaced by U. The diagonal elements of the singular value matrix S are stored in the vector S. The singular values are non-negative and form a non-increasing sequence from S_1 to S_N. The matrix V contains the elements of V in untransposed form. To form the product U S V^T it is necessary to take the transpose of V. A workspace of length N is required in WORK. This routine uses the Golub-Reinsch SVD algorithm. -- Function: int gsl_linalg_SV_decomp_mod (gsl_matrix * A, gsl_matrix * X, gsl_matrix * V, gsl_vector * S, gsl_vector * WORK) This function computes the SVD using the modified Golub-Reinsch algorithm, which is faster for M>>N. It requires the vector WORK of length N and the N-by-N matrix X as additional working space. -- Function: int gsl_linalg_SV_decomp_jacobi (gsl_matrix * A, gsl_matrix * V, gsl_vector * S) This function computes the SVD of the M-by-N matrix A using one-sided Jacobi orthogonalization for M >= N. The Jacobi method can compute singular values to higher relative accuracy than Golub-Reinsch algorithms (see references for details). -- Function: int gsl_linalg_SV_solve (const gsl_matrix * U, const gsl_matrix * V, const gsl_vector * S, const gsl_vector * B, gsl_vector * X) This function solves the system A x = b using the singular value decomposition (U, S, V) of A which must have been computed previously with `gsl_linalg_SV_decomp'. Only non-zero singular values are used in computing the solution. The parts of the solution corresponding to singular values of zero are ignored. Other singular values can be edited out by setting them to zero before calling this function. In the over-determined case where A has more rows than columns the system is solved in the least squares sense, returning the solution X which minimizes ||A x - b||_2.  File: gsl-ref.info, Node: Cholesky Decomposition, Next: Tridiagonal Decomposition of Real Symmetric Matrices, Prev: Singular Value Decomposition, Up: Linear Algebra 14.5 Cholesky Decomposition =========================== A symmetric, positive definite square matrix A has a Cholesky decomposition into a product of a lower triangular matrix L and its transpose L^T, A = L L^T This is sometimes referred to as taking the square-root of a matrix. The Cholesky decomposition can only be carried out when all the eigenvalues of the matrix are positive. This decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = b, L^T x = y), which can be solved by forward and back-substitution. -- Function: int gsl_linalg_cholesky_decomp (gsl_matrix * A) -- Function: int gsl_linalg_complex_cholesky_decomp (gsl_matrix_complex * A) These functions factorize the symmetric, positive-definite square matrix A into the Cholesky decomposition A = L L^T (or A = L L^H for the complex case). On input, the values from the diagonal and lower-triangular part of the matrix A are used (the upper triangular part is ignored). On output the diagonal and lower triangular part of the input matrix A contain the matrix L, while the upper triangular part of the input matrix is overwritten with L^T (the diagonal terms being identical for both L and L^T). 
If the matrix is not positive-definite then the decomposition will fail, returning the error code `GSL_EDOM'. When testing whether a matrix is positive-definite, disable the error handler first to avoid triggering an error. -- Function: int gsl_linalg_cholesky_solve (const gsl_matrix * CHOLESKY, const gsl_vector * B, gsl_vector * X) -- Function: int gsl_linalg_complex_cholesky_solve (const gsl_matrix_complex * CHOLESKY, const gsl_vector_complex * B, gsl_vector_complex * X) These functions solve the system A x = b using the Cholesky decomposition of A held in the matrix CHOLESKY which must have been previously computed by `gsl_linalg_cholesky_decomp' or `gsl_linalg_complex_cholesky_decomp'. -- Function: int gsl_linalg_cholesky_svx (const gsl_matrix * CHOLESKY, gsl_vector * X) -- Function: int gsl_linalg_complex_cholesky_svx (const gsl_matrix_complex * CHOLESKY, gsl_vector_complex * X) These functions solve the system A x = b in-place using the Cholesky decomposition of A held in the matrix CHOLESKY which must have been previously computed by `gsl_linalg_cholesky_decomp' or `gsl_linalg_complex_cholesky_decomp'. On input X should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_cholesky_invert (gsl_matrix * CHOLESKY) -- Function: int gsl_linalg_complex_cholesky_invert (gsl_matrix_complex * CHOLESKY) These functions compute the inverse of a matrix from its Cholesky decomposition CHOLESKY, which must have been previously computed by `gsl_linalg_cholesky_decomp' or `gsl_linalg_complex_cholesky_decomp'. On output, the inverse is stored in-place in CHOLESKY.  File: gsl-ref.info, Node: Tridiagonal Decomposition of Real Symmetric Matrices, Next: Tridiagonal Decomposition of Hermitian Matrices, Prev: Cholesky Decomposition, Up: Linear Algebra 14.6 Tridiagonal Decomposition of Real Symmetric Matrices ========================================================= A symmetric matrix A can be factorized by similarity transformations into the form, A = Q T Q^T where Q is an orthogonal matrix and T is a symmetric tridiagonal matrix. -- Function: int gsl_linalg_symmtd_decomp (gsl_matrix * A, gsl_vector * TAU) This function factorizes the symmetric square matrix A into the symmetric tridiagonal decomposition Q T Q^T. On output the diagonal and subdiagonal part of the input matrix A contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients TAU, encode the orthogonal matrix Q. This storage scheme is the same as used by LAPACK. The upper triangular part of A is not referenced. -- Function: int gsl_linalg_symmtd_unpack (const gsl_matrix * A, const gsl_vector * TAU, gsl_matrix * Q, gsl_vector * DIAG, gsl_vector * SUBDIAG) This function unpacks the encoded symmetric tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_symmtd_decomp' into the orthogonal matrix Q, the vector of diagonal elements DIAG and the vector of subdiagonal elements SUBDIAG. -- Function: int gsl_linalg_symmtd_unpack_T (const gsl_matrix * A, gsl_vector * DIAG, gsl_vector * SUBDIAG) This function unpacks the diagonal and subdiagonal of the encoded symmetric tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_symmtd_decomp' into the vectors DIAG and SUBDIAG.  
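The positive-definiteness test mentioned in the Cholesky section above can be written, for example, as the following sketch, which turns off the error handler and inspects the return code of `gsl_linalg_cholesky_decomp'; the test matrix is an arbitrary example:

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* Arbitrary symmetric example matrix; the decomposition
          overwrites it, so work on a copy if the original values
          are still needed. */
       double a_data[] = { 4.0, 1.0, 1.0,
                           1.0, 3.0, 0.0,
                           1.0, 0.0, 2.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 3, 3);
       int status;

       /* Disable the error handler so GSL_EDOM is reported through
          the return value instead of aborting the program. */
       gsl_set_error_handler_off ();

       status = gsl_linalg_cholesky_decomp (&A.matrix);

       if (status == GSL_SUCCESS)
         printf ("matrix is positive definite\n");
       else if (status == GSL_EDOM)
         printf ("matrix is not positive definite\n");

       return 0;
     }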
File: gsl-ref.info, Node: Tridiagonal Decomposition of Hermitian Matrices, Next: Hessenberg Decomposition of Real Matrices, Prev: Tridiagonal Decomposition of Real Symmetric Matrices, Up: Linear Algebra

14.7 Tridiagonal Decomposition of Hermitian Matrices
====================================================

A hermitian matrix A can be factorized by similarity transformations into the form,

     A = U T U^H

where U is a unitary matrix and T is a real symmetric tridiagonal matrix.

 -- Function: int gsl_linalg_hermtd_decomp (gsl_matrix_complex * A, gsl_vector_complex * TAU)

This function factorizes the hermitian matrix A into the symmetric tridiagonal decomposition U T U^H. On output the real parts of the diagonal and subdiagonal part of the input matrix A contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients TAU, encode the unitary matrix U. This storage scheme is the same as used by LAPACK. The upper triangular part of A and imaginary parts of the diagonal are not referenced.

 -- Function: int gsl_linalg_hermtd_unpack (const gsl_matrix_complex * A, const gsl_vector_complex * TAU, gsl_matrix_complex * U, gsl_vector * DIAG, gsl_vector * SUBDIAG)

This function unpacks the encoded tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_hermtd_decomp' into the unitary matrix U, the real vector of diagonal elements DIAG and the real vector of subdiagonal elements SUBDIAG.

 -- Function: int gsl_linalg_hermtd_unpack_T (const gsl_matrix_complex * A, gsl_vector * DIAG, gsl_vector * SUBDIAG)

This function unpacks the diagonal and subdiagonal of the encoded tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_hermtd_decomp' into the real vectors DIAG and SUBDIAG.

File: gsl-ref.info, Node: Hessenberg Decomposition of Real Matrices, Next: Hessenberg-Triangular Decomposition of Real Matrices, Prev: Tridiagonal Decomposition of Hermitian Matrices, Up: Linear Algebra

14.8 Hessenberg Decomposition of Real Matrices
==============================================

A general real matrix A can be decomposed by orthogonal similarity transformations into the form

     A = U H U^T

where U is orthogonal and H is an upper Hessenberg matrix, meaning that it has zeros below the first subdiagonal. The Hessenberg reduction is the first step in the Schur decomposition for the nonsymmetric eigenvalue problem, but has applications in other areas as well.

 -- Function: int gsl_linalg_hessenberg_decomp (gsl_matrix * A, gsl_vector * TAU)

This function computes the Hessenberg decomposition of the matrix A by applying the similarity transformation H = U^T A U. On output, H is stored in the upper portion of A. The information required to construct the matrix U is stored in the lower triangular portion of A. U is a product of N - 2 Householder matrices. The Householder vectors are stored in the lower portion of A (below the subdiagonal) and the Householder coefficients are stored in the vector TAU. TAU must be of length N.

 -- Function: int gsl_linalg_hessenberg_unpack (gsl_matrix * H, gsl_vector * TAU, gsl_matrix * U)

This function constructs the orthogonal matrix U from the information stored in the Hessenberg matrix H along with the vector TAU. H and TAU are outputs from `gsl_linalg_hessenberg_decomp'.
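As an illustration, a sketch along the following lines computes a Hessenberg decomposition and unpacks the orthogonal factor; the matrix values are arbitrary, and `gsl_linalg_hessenberg_set_zero' (described below) is used only to tidy up H before it is inspected:

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* Arbitrary 4-by-4 example matrix */
       double a_data[] = { 1.0, 2.0, 3.0, 4.0,
                           5.0, 6.0, 7.0, 8.0,
                           9.0, 1.0, 2.0, 3.0,
                           4.0, 5.0, 6.0, 7.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 4, 4);

       gsl_vector *tau = gsl_vector_alloc (4);   /* length N */
       gsl_matrix *U = gsl_matrix_alloc (4, 4);

       /* On output A holds H plus the encoded Householder vectors */
       gsl_linalg_hessenberg_decomp (&A.matrix, tau);

       /* Reconstruct the orthogonal matrix U before clearing A */
       gsl_linalg_hessenberg_unpack (&A.matrix, tau, U);

       /* Clear the Householder vectors so A contains only H */
       gsl_linalg_hessenberg_set_zero (&A.matrix);

       printf ("H(2,0) = %g (zero below the first subdiagonal)\n",
               gsl_matrix_get (&A.matrix, 2, 0));

       gsl_vector_free (tau);
       gsl_matrix_free (U);
       return 0;
     }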
 -- Function: int gsl_linalg_hessenberg_unpack_accum (gsl_matrix * H, gsl_vector * TAU, gsl_matrix * V)

This function is similar to `gsl_linalg_hessenberg_unpack', except it accumulates the matrix U into V, so that V' = VU. The matrix V must be initialized prior to calling this function. Setting V to the identity matrix provides the same result as `gsl_linalg_hessenberg_unpack'. If H is order N, then V must have N columns but may have any number of rows.

 -- Function: int gsl_linalg_hessenberg_set_zero (gsl_matrix * H)

This function sets the lower triangular portion of H, below the subdiagonal, to zero. It is useful for clearing out the Householder vectors after calling `gsl_linalg_hessenberg_decomp'.

File: gsl-ref.info, Node: Hessenberg-Triangular Decomposition of Real Matrices, Next: Bidiagonalization, Prev: Hessenberg Decomposition of Real Matrices, Up: Linear Algebra

14.9 Hessenberg-Triangular Decomposition of Real Matrices
=========================================================

A general real matrix pair (A, B) can be decomposed by orthogonal similarity transformations into the form

     A = U H V^T
     B = U R V^T

where U and V are orthogonal, H is an upper Hessenberg matrix, and R is upper triangular. The Hessenberg-Triangular reduction is the first step in the generalized Schur decomposition for the generalized eigenvalue problem.

 -- Function: int gsl_linalg_hesstri_decomp (gsl_matrix * A, gsl_matrix * B, gsl_matrix * U, gsl_matrix * V, gsl_vector * WORK)

This function computes the Hessenberg-Triangular decomposition of the matrix pair (A, B). On output, H is stored in A, and R is stored in B. If U and V are provided (they may be null), the similarity transformations are stored in them. Additional workspace of length N is needed in WORK.

File: gsl-ref.info, Node: Bidiagonalization, Next: Householder Transformations, Prev: Hessenberg-Triangular Decomposition of Real Matrices, Up: Linear Algebra

14.10 Bidiagonalization
=======================

A general matrix A can be factorized by similarity transformations into the form,

     A = U B V^T

where U and V are orthogonal matrices and B is an N-by-N bidiagonal matrix with non-zero entries only on the diagonal and superdiagonal. The size of U is M-by-N and the size of V is N-by-N.

 -- Function: int gsl_linalg_bidiag_decomp (gsl_matrix * A, gsl_vector * TAU_U, gsl_vector * TAU_V)

This function factorizes the M-by-N matrix A into bidiagonal form U B V^T. The diagonal and superdiagonal of the matrix B are stored in the diagonal and superdiagonal of A. The orthogonal matrices U and V are stored as compressed Householder vectors in the remaining elements of A. The Householder coefficients are stored in the vectors TAU_U and TAU_V. The length of TAU_U must equal the number of elements in the diagonal of A and the length of TAU_V should be one element shorter.

 -- Function: int gsl_linalg_bidiag_unpack (const gsl_matrix * A, const gsl_vector * TAU_U, gsl_matrix * U, const gsl_vector * TAU_V, gsl_matrix * V, gsl_vector * DIAG, gsl_vector * SUPERDIAG)

This function unpacks the bidiagonal decomposition of A produced by `gsl_linalg_bidiag_decomp', (A, TAU_U, TAU_V) into the separate orthogonal matrices U, V and the diagonal vector DIAG and superdiagonal SUPERDIAG. Note that U is stored as a compact M-by-N orthogonal matrix satisfying U^T U = I for efficiency.
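A minimal sketch of the bidiagonalization calls might look as follows; the 4-by-3 matrix is an arbitrary example, and the vector lengths follow the rules given above:

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* Arbitrary 4-by-3 example matrix (M = 4, N = 3) */
       double a_data[] = { 1.0,  2.0,  3.0,
                           4.0,  5.0,  6.0,
                           7.0,  8.0,  9.0,
                          10.0, 11.0, 12.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 4, 3);

       gsl_vector *tau_U = gsl_vector_alloc (3);   /* diagonal length */
       gsl_vector *tau_V = gsl_vector_alloc (2);   /* one element shorter */
       gsl_matrix *U = gsl_matrix_alloc (4, 3);    /* M-by-N */
       gsl_matrix *V = gsl_matrix_alloc (3, 3);    /* N-by-N */
       gsl_vector *diag = gsl_vector_alloc (3);
       gsl_vector *superdiag = gsl_vector_alloc (2);

       gsl_linalg_bidiag_decomp (&A.matrix, tau_U, tau_V);
       gsl_linalg_bidiag_unpack (&A.matrix, tau_U, U, tau_V, V,
                                 diag, superdiag);

       printf ("diagonal of B:\n");
       gsl_vector_fprintf (stdout, diag, "%g");

       gsl_vector_free (tau_U);
       gsl_vector_free (tau_V);
       gsl_matrix_free (U);
       gsl_matrix_free (V);
       gsl_vector_free (diag);
       gsl_vector_free (superdiag);
       return 0;
     }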
-- Function: int gsl_linalg_bidiag_unpack2 (gsl_matrix * A, gsl_vector * TAU_U, gsl_vector * TAU_V, gsl_matrix * V) This function unpacks the bidiagonal decomposition of A produced by `gsl_linalg_bidiag_decomp', (A, TAU_U, TAU_V) into the separate orthogonal matrices U, V and the diagonal vector DIAG and superdiagonal SUPERDIAG. The matrix U is stored in-place in A. -- Function: int gsl_linalg_bidiag_unpack_B (const gsl_matrix * A, gsl_vector * DIAG, gsl_vector * SUPERDIAG) This function unpacks the diagonal and superdiagonal of the bidiagonal decomposition of A from `gsl_linalg_bidiag_decomp', into the diagonal vector DIAG and superdiagonal vector SUPERDIAG.  File: gsl-ref.info, Node: Householder Transformations, Next: Householder solver for linear systems, Prev: Bidiagonalization, Up: Linear Algebra 14.11 Householder Transformations ================================= A Householder transformation is a rank-1 modification of the identity matrix which can be used to zero out selected elements of a vector. A Householder matrix P takes the form, P = I - \tau v v^T where v is a vector (called the "Householder vector") and \tau = 2/(v^T v). The functions described in this section use the rank-1 structure of the Householder matrix to create and apply Householder transformations efficiently. -- Function: double gsl_linalg_householder_transform (gsl_vector * V) -- Function: gsl_complex gsl_linalg_complex_householder_transform (gsl_vector_complex * V) This function prepares a Householder transformation P = I - \tau v v^T which can be used to zero all the elements of the input vector except the first. On output the transformation is stored in the vector V and the scalar \tau is returned. -- Function: int gsl_linalg_householder_hm (double TAU, const gsl_vector * V, gsl_matrix * A) -- Function: int gsl_linalg_complex_householder_hm (gsl_complex TAU, const gsl_vector_complex * V, gsl_matrix_complex * A) This function applies the Householder matrix P defined by the scalar TAU and the vector V to the left-hand side of the matrix A. On output the result P A is stored in A. -- Function: int gsl_linalg_householder_mh (double TAU, const gsl_vector * V, gsl_matrix * A) -- Function: int gsl_linalg_complex_householder_mh (gsl_complex TAU, const gsl_vector_complex * V, gsl_matrix_complex * A) This function applies the Householder matrix P defined by the scalar TAU and the vector V to the right-hand side of the matrix A. On output the result A P is stored in A. -- Function: int gsl_linalg_householder_hv (double TAU, const gsl_vector * V, gsl_vector * W) -- Function: int gsl_linalg_complex_householder_hv (gsl_complex TAU, const gsl_vector_complex * V, gsl_vector_complex * W) This function applies the Householder transformation P defined by the scalar TAU and the vector V to the vector W. On output the result P w is stored in W.  File: gsl-ref.info, Node: Householder solver for linear systems, Next: Tridiagonal Systems, Prev: Householder Transformations, Up: Linear Algebra 14.12 Householder solver for linear systems =========================================== -- Function: int gsl_linalg_HH_solve (gsl_matrix * A, const gsl_vector * B, gsl_vector * X) This function solves the system A x = b directly using Householder transformations. On output the solution is stored in X and B is not modified. The matrix A is destroyed by the Householder transformations. -- Function: int gsl_linalg_HH_svx (gsl_matrix * A, gsl_vector * X) This function solves the system A x = b in-place using Householder transformations. 
On input X should contain the right-hand side b, which is replaced by the solution on output. The matrix A is destroyed by the Householder transformations.  File: gsl-ref.info, Node: Tridiagonal Systems, Next: Balancing, Prev: Householder solver for linear systems, Up: Linear Algebra 14.13 Tridiagonal Systems ========================= The functions described in this section efficiently solve symmetric, non-symmetric and cyclic tridiagonal systems with minimal storage. Note that the current implementations of these functions use a variant of Cholesky decomposition, so the tridiagonal matrix must be positive definite. For non-positive definite matrices, the functions return the error code `GSL_ESING'. -- Function: int gsl_linalg_solve_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * F, const gsl_vector * B, gsl_vector * X) This function solves the general N-by-N system A x = b where A is tridiagonal (N >= 2). The super-diagonal and sub-diagonal vectors E and F must be one element shorter than the diagonal vector DIAG. The form of A for the 4-by-4 case is shown below, A = ( d_0 e_0 0 0 ) ( f_0 d_1 e_1 0 ) ( 0 f_1 d_2 e_2 ) ( 0 0 f_2 d_3 ) -- Function: int gsl_linalg_solve_symm_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * B, gsl_vector * X) This function solves the general N-by-N system A x = b where A is symmetric tridiagonal (N >= 2). The off-diagonal vector E must be one element shorter than the diagonal vector DIAG. The form of A for the 4-by-4 case is shown below, A = ( d_0 e_0 0 0 ) ( e_0 d_1 e_1 0 ) ( 0 e_1 d_2 e_2 ) ( 0 0 e_2 d_3 ) -- Function: int gsl_linalg_solve_cyc_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * F, const gsl_vector * B, gsl_vector * X) This function solves the general N-by-N system A x = b where A is cyclic tridiagonal (N >= 3). The cyclic super-diagonal and sub-diagonal vectors E and F must have the same number of elements as the diagonal vector DIAG. The form of A for the 4-by-4 case is shown below, A = ( d_0 e_0 0 f_3 ) ( f_0 d_1 e_1 0 ) ( 0 f_1 d_2 e_2 ) ( e_3 0 f_2 d_3 ) -- Function: int gsl_linalg_solve_symm_cyc_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * B, gsl_vector * X) This function solves the general N-by-N system A x = b where A is symmetric cyclic tridiagonal (N >= 3). The cyclic off-diagonal vector E must have the same number of elements as the diagonal vector DIAG. The form of A for the 4-by-4 case is shown below, A = ( d_0 e_0 0 e_3 ) ( e_0 d_1 e_1 0 ) ( 0 e_1 d_2 e_2 ) ( e_3 0 e_2 d_3 )  File: gsl-ref.info, Node: Balancing, Next: Linear Algebra Examples, Prev: Tridiagonal Systems, Up: Linear Algebra 14.14 Balancing =============== The process of balancing a matrix applies similarity transformations to make the rows and columns have comparable norms. This is useful, for example, to reduce roundoff errors in the solution of eigenvalue problems. Balancing a matrix A consists of replacing A with a similar matrix A' = D^(-1) A D where D is a diagonal matrix whose entries are powers of the floating point radix. -- Function: int gsl_linalg_balance_matrix (gsl_matrix * A, gsl_vector * D) This function replaces the matrix A with its balanced counterpart and stores the diagonal elements of the similarity transformation into the vector D.  
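Returning to the tridiagonal solvers described above, the following sketch solves a small symmetric tridiagonal system with `gsl_linalg_solve_symm_tridiag'; the diagonal and off-diagonal values are arbitrary examples of a positive definite matrix:

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* 4-by-4 symmetric tridiagonal system (arbitrary example,
          diagonally dominant and therefore positive definite) */
       double d_data[] = { 4.0, 4.0, 4.0, 4.0 };   /* diagonal */
       double e_data[] = { 1.0, 1.0, 1.0 };        /* off-diagonal */
       double b_data[] = { 1.0, 2.0, 3.0, 4.0 };   /* right-hand side */
       double x_data[] = { 0.0, 0.0, 0.0, 0.0 };

       gsl_vector_view diag = gsl_vector_view_array (d_data, 4);
       gsl_vector_view e = gsl_vector_view_array (e_data, 3);
       gsl_vector_view b = gsl_vector_view_array (b_data, 4);
       gsl_vector_view x = gsl_vector_view_array (x_data, 4);

       gsl_linalg_solve_symm_tridiag (&diag.vector, &e.vector,
                                      &b.vector, &x.vector);

       printf ("x =\n");
       gsl_vector_fprintf (stdout, &x.vector, "%g");

       return 0;
     }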
File: gsl-ref.info, Node: Linear Algebra Examples, Next: Linear Algebra References and Further Reading, Prev: Balancing, Up: Linear Algebra

14.15 Examples
==============

The following program solves the linear system A x = b. The system to be solved is,

     [ 0.18 0.60 0.57 0.96 ] [x0]   [1.0]
     [ 0.41 0.24 0.99 0.58 ] [x1] = [2.0]
     [ 0.14 0.30 0.97 0.66 ] [x2]   [3.0]
     [ 0.51 0.13 0.19 0.85 ] [x3]   [4.0]

and the solution is found using LU decomposition of the matrix A.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 0.18, 0.60, 0.57, 0.96,
                           0.41, 0.24, 0.99, 0.58,
                           0.14, 0.30, 0.97, 0.66,
                           0.51, 0.13, 0.19, 0.85 };

       double b_data[] = { 1.0, 2.0, 3.0, 4.0 };

       gsl_matrix_view m = gsl_matrix_view_array (a_data, 4, 4);
       gsl_vector_view b = gsl_vector_view_array (b_data, 4);

       gsl_vector *x = gsl_vector_alloc (4);

       int s;
       gsl_permutation * p = gsl_permutation_alloc (4);

       gsl_linalg_LU_decomp (&m.matrix, p, &s);
       gsl_linalg_LU_solve (&m.matrix, p, &b.vector, x);

       printf ("x = \n");
       gsl_vector_fprintf (stdout, x, "%g");

       gsl_permutation_free (p);
       gsl_vector_free (x);
       return 0;
     }

Here is the output from the program,

     x =
     -4.05205
     -12.6056
     1.66091
     8.69377

This can be verified by multiplying the solution x by the original matrix A using GNU OCTAVE,

     octave> A = [ 0.18, 0.60, 0.57, 0.96;
                   0.41, 0.24, 0.99, 0.58;
                   0.14, 0.30, 0.97, 0.66;
                   0.51, 0.13, 0.19, 0.85 ];
     octave> x = [ -4.05205; -12.6056; 1.66091; 8.69377];
     octave> A * x
     ans =
       1.0000
       2.0000
       3.0000
       4.0000

This reproduces the original right-hand side vector, b, in accordance with the equation A x = b.

File: gsl-ref.info, Node: Linear Algebra References and Further Reading, Prev: Linear Algebra Examples, Up: Linear Algebra

14.16 References and Further Reading
====================================

Further information on the algorithms described in this section can be found in the following book,

     G. H. Golub, C. F. Van Loan, `Matrix Computations' (3rd Ed, 1996), Johns Hopkins University Press, ISBN 0-8018-5414-8.

The LAPACK library is described in the following manual,

     `LAPACK Users' Guide' (Third Edition, 1999), Published by SIAM, ISBN 0-89871-447-8.

     `http://www.netlib.org/lapack'

The LAPACK source code can be found at the website above, along with an online copy of the users guide.

The Modified Golub-Reinsch algorithm is described in the following paper,

     T.F. Chan, "An Improved Algorithm for Computing the Singular Value Decomposition", `ACM Transactions on Mathematical Software', 8 (1982), pp 72-83.

The Jacobi algorithm for singular value decomposition is described in the following papers,

     J.C. Nash, "A one-sided transformation method for the singular value decomposition and algebraic eigenproblem", `Computer Journal', Volume 18, Number 1 (1975), p 74-76.

     J.C. Nash and S. Shlien, "Simple algorithms for the partial singular value decomposition", `Computer Journal', Volume 30 (1987), p 268-275.

     James Demmel, Kresimir Veselic, "Jacobi's Method is more accurate than QR", `Lapack Working Note 15' (LAWN-15), October 1989.

 -- Function: int gsl_eigen_nonsymm (gsl_matrix * A, gsl_vector_complex * EVAL, gsl_eigen_nonsymm_workspace * W)

This function computes the eigenvalues of the real nonsymmetric matrix A and stores them in the vector EVAL. In rare cases the QR iteration may fail to converge on all eigenvalues; if this happens an error code is returned, the number of converged eigenvalues is stored in `w->n_evals', and the converged eigenvalues are stored in the beginning of EVAL.

 -- Function: int gsl_eigen_nonsymm_Z (gsl_matrix * A, gsl_vector_complex * EVAL, gsl_matrix * Z, gsl_eigen_nonsymm_workspace * W)

This function is identical to `gsl_eigen_nonsymm' except that it also computes the Schur vectors and stores them into Z.

 -- Function: gsl_eigen_nonsymmv_workspace * gsl_eigen_nonsymmv_alloc (const size_t N)

This function allocates a workspace for computing eigenvalues and eigenvectors of N-by-N real nonsymmetric matrices. The size of the workspace is O(5n).
-- Function: void gsl_eigen_nonsymmv_free (gsl_eigen_nonsymmv_workspace * W) This function frees the memory associated with the workspace W. -- Function: void gsl_eigen_nonsymmv_params (const int BALANCE, gsl_eigen_nonsymm_workspace * W) This function sets parameters which determine how the eigenvalue problem is solved in subsequent calls to `gsl_eigen_nonsymmv'. If BALANCE is set to 1, a balancing transformation is applied to the matrix. See `gsl_eigen_nonsymm_params' for more information. Balancing is turned off by default since it does not preserve the orthogonality of the Schur vectors. -- Function: int gsl_eigen_nonsymmv (gsl_matrix * A, gsl_vector_complex * EVAL, gsl_matrix_complex * EVEC, gsl_eigen_nonsymmv_workspace * W) This function computes eigenvalues and right eigenvectors of the N-by-N real nonsymmetric matrix A. It first calls `gsl_eigen_nonsymm' to compute the eigenvalues, Schur form T, and Schur vectors. Then it finds eigenvectors of T and backtransforms them using the Schur vectors. The Schur vectors are destroyed in the process, but can be saved by using `gsl_eigen_nonsymmv_Z'. The computed eigenvectors are normalized to have unit magnitude. On output, the upper portion of A contains the Schur form T. If `gsl_eigen_nonsymm' fails, no eigenvectors are computed, and an error code is returned. -- Function: int gsl_eigen_nonsymmv_Z (gsl_matrix * A, gsl_vector_complex * EVAL, gsl_matrix_complex * EVEC, gsl_matrix * Z, gsl_eigen_nonsymmv_workspace * W) This function is identical to `gsl_eigen_nonsymmv' except that it also saves the Schur vectors into Z.  File: gsl-ref.info, Node: Real Generalized Symmetric-Definite Eigensystems, Next: Complex Generalized Hermitian-Definite Eigensystems, Prev: Real Nonsymmetric Matrices, Up: Eigensystems 15.4 Real Generalized Symmetric-Definite Eigensystems ===================================================== The real generalized symmetric-definite eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that A x = \lambda B x where A and B are symmetric matrices, and B is positive-definite. This problem reduces to the standard symmetric eigenvalue problem by applying the Cholesky decomposition to B: A x = \lambda B x A x = \lambda L L^t x ( L^{-1} A L^{-t} ) L^t x = \lambda L^t x Therefore, the problem becomes C y = \lambda y where C = L^{-1} A L^{-t} is symmetric, and y = L^t x. The standard symmetric eigensolver can be applied to the matrix C. The resulting eigenvectors are backtransformed to find the vectors of the original problem. The eigenvalues and eigenvectors of the generalized symmetric-definite eigenproblem are always real. -- Function: gsl_eigen_gensymm_workspace * gsl_eigen_gensymm_alloc (const size_t N) This function allocates a workspace for computing eigenvalues of N-by-N real generalized symmetric-definite eigensystems. The size of the workspace is O(2n). -- Function: void gsl_eigen_gensymm_free (gsl_eigen_gensymm_workspace * W) This function frees the memory associated with the workspace W. -- Function: int gsl_eigen_gensymm (gsl_matrix * A, gsl_matrix * B, gsl_vector * EVAL, gsl_eigen_gensymm_workspace * W) This function computes the eigenvalues of the real generalized symmetric-definite matrix pair (A, B), and stores them in EVAL, using the method outlined above. On output, B contains its Cholesky decomposition and A is destroyed. 
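For example, a sketch of the eigenvalue-only interface could look as follows; the symmetric matrix A and positive-definite matrix B are arbitrary example values:

     #include <stdio.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       /* Arbitrary symmetric A and positive-definite B (2-by-2) */
       double a_data[] = { 2.0, 1.0,
                           1.0, 3.0 };
       double b_data[] = { 4.0, 1.0,
                           1.0, 2.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_matrix_view B = gsl_matrix_view_array (b_data, 2, 2);

       gsl_vector *eval = gsl_vector_alloc (2);
       gsl_eigen_gensymm_workspace *w = gsl_eigen_gensymm_alloc (2);

       /* On output A is destroyed and B holds its Cholesky factor */
       gsl_eigen_gensymm (&A.matrix, &B.matrix, eval, w);

       printf ("eigenvalues:\n");
       gsl_vector_fprintf (stdout, eval, "%g");

       gsl_eigen_gensymm_free (w);
       gsl_vector_free (eval);
       return 0;
     }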
-- Function: gsl_eigen_gensymmv_workspace * gsl_eigen_gensymmv_alloc (const size_t N) This function allocates a workspace for computing eigenvalues and eigenvectors of N-by-N real generalized symmetric-definite eigensystems. The size of the workspace is O(4n). -- Function: void gsl_eigen_gensymmv_free (gsl_eigen_gensymmv_workspace * W) This function frees the memory associated with the workspace W. -- Function: int gsl_eigen_gensymmv (gsl_matrix * A, gsl_matrix * B, gsl_vector * EVAL, gsl_matrix * EVEC, gsl_eigen_gensymmv_workspace * W) This function computes the eigenvalues and eigenvectors of the real generalized symmetric-definite matrix pair (A, B), and stores them in EVAL and EVEC respectively. The computed eigenvectors are normalized to have unit magnitude. On output, B contains its Cholesky decomposition and A is destroyed.  File: gsl-ref.info, Node: Complex Generalized Hermitian-Definite Eigensystems, Next: Real Generalized Nonsymmetric Eigensystems, Prev: Real Generalized Symmetric-Definite Eigensystems, Up: Eigensystems 15.5 Complex Generalized Hermitian-Definite Eigensystems ======================================================== The complex generalized hermitian-definite eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that A x = \lambda B x where A and B are hermitian matrices, and B is positive-definite. Similarly to the real case, this can be reduced to C y = \lambda y where C = L^{-1} A L^{-H} is hermitian, and y = L^H x. The standard hermitian eigensolver can be applied to the matrix C. The resulting eigenvectors are backtransformed to find the vectors of the original problem. The eigenvalues of the generalized hermitian-definite eigenproblem are always real. -- Function: gsl_eigen_genherm_workspace * gsl_eigen_genherm_alloc (const size_t N) This function allocates a workspace for computing eigenvalues of N-by-N complex generalized hermitian-definite eigensystems. The size of the workspace is O(3n). -- Function: void gsl_eigen_genherm_free (gsl_eigen_genherm_workspace * W) This function frees the memory associated with the workspace W. -- Function: int gsl_eigen_genherm (gsl_matrix_complex * A, gsl_matrix_complex * B, gsl_vector * EVAL, gsl_eigen_genherm_workspace * W) This function computes the eigenvalues of the complex generalized hermitian-definite matrix pair (A, B), and stores them in EVAL, using the method outlined above. On output, B contains its Cholesky decomposition and A is destroyed. -- Function: gsl_eigen_genhermv_workspace * gsl_eigen_genhermv_alloc (const size_t N) This function allocates a workspace for computing eigenvalues and eigenvectors of N-by-N complex generalized hermitian-definite eigensystems. The size of the workspace is O(5n). -- Function: void gsl_eigen_genhermv_free (gsl_eigen_genhermv_workspace * W) This function frees the memory associated with the workspace W. -- Function: int gsl_eigen_genhermv (gsl_matrix_complex * A, gsl_matrix_complex * B, gsl_vector * EVAL, gsl_matrix_complex * EVEC, gsl_eigen_genhermv_workspace * W) This function computes the eigenvalues and eigenvectors of the complex generalized hermitian-definite matrix pair (A, B), and stores them in EVAL and EVEC respectively. The computed eigenvectors are normalized to have unit magnitude. On output, B contains its Cholesky decomposition and A is destroyed.  
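A corresponding sketch for the complex case is shown below; it builds small hermitian matrices element by element with `gsl_complex_rect' (the values are arbitrary, with B chosen positive-definite):

     #include <stdio.h>
     #include <gsl/gsl_complex.h>
     #include <gsl/gsl_complex_math.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       gsl_matrix_complex *A = gsl_matrix_complex_alloc (2, 2);
       gsl_matrix_complex *B = gsl_matrix_complex_alloc (2, 2);
       gsl_vector *eval = gsl_vector_alloc (2);
       gsl_eigen_genherm_workspace *w = gsl_eigen_genherm_alloc (2);

       /* Arbitrary hermitian A */
       gsl_matrix_complex_set (A, 0, 0, gsl_complex_rect (2.0, 0.0));
       gsl_matrix_complex_set (A, 0, 1, gsl_complex_rect (0.0, 1.0));
       gsl_matrix_complex_set (A, 1, 0, gsl_complex_rect (0.0, -1.0));
       gsl_matrix_complex_set (A, 1, 1, gsl_complex_rect (3.0, 0.0));

       /* Arbitrary hermitian positive-definite B */
       gsl_matrix_complex_set (B, 0, 0, gsl_complex_rect (4.0, 0.0));
       gsl_matrix_complex_set (B, 0, 1, gsl_complex_rect (1.0, 0.0));
       gsl_matrix_complex_set (B, 1, 0, gsl_complex_rect (1.0, 0.0));
       gsl_matrix_complex_set (B, 1, 1, gsl_complex_rect (2.0, 0.0));

       /* The eigenvalues of the hermitian-definite pair are real */
       gsl_eigen_genherm (A, B, eval, w);

       printf ("eigenvalues:\n");
       gsl_vector_fprintf (stdout, eval, "%g");

       gsl_eigen_genherm_free (w);
       gsl_vector_free (eval);
       gsl_matrix_complex_free (A);
       gsl_matrix_complex_free (B);
       return 0;
     }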
File: gsl-ref.info, Node: Real Generalized Nonsymmetric Eigensystems, Next: Sorting Eigenvalues and Eigenvectors, Prev: Complex Generalized Hermitian-Definite Eigensystems, Up: Eigensystems 15.6 Real Generalized Nonsymmetric Eigensystems =============================================== Given two square matrices (A, B), the generalized nonsymmetric eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that A x = \lambda B x We may also define the problem as finding eigenvalues \mu and eigenvectors y such that \mu A y = B y Note that these two problems are equivalent (with \lambda = 1/\mu) if neither \lambda nor \mu is zero. If say, \lambda is zero, then it is still a well defined eigenproblem, but its alternate problem involving \mu is not. Therefore, to allow for zero (and infinite) eigenvalues, the problem which is actually solved is \beta A x = \alpha B x The eigensolver routines below will return two values \alpha and \beta and leave it to the user to perform the divisions \lambda = \alpha / \beta and \mu = \beta / \alpha. If the determinant of the matrix pencil A - \lambda B is zero for all \lambda, the problem is said to be singular; otherwise it is called regular. Singularity normally leads to some \alpha = \beta = 0 which means the eigenproblem is ill-conditioned and generally does not have well defined eigenvalue solutions. The routines below are intended for regular matrix pencils and could yield unpredictable results when applied to singular pencils. The solution of the real generalized nonsymmetric eigensystem problem for a matrix pair (A, B) involves computing the generalized Schur decomposition A = Q S Z^T B = Q T Z^T where Q and Z are orthogonal matrices of left and right Schur vectors respectively, and (S, T) is the generalized Schur form whose diagonal elements give the \alpha and \beta values. The algorithm used is the QZ method due to Moler and Stewart (see references). -- Function: gsl_eigen_gen_workspace * gsl_eigen_gen_alloc (const size_t N) This function allocates a workspace for computing eigenvalues of N-by-N real generalized nonsymmetric eigensystems. The size of the workspace is O(n). -- Function: void gsl_eigen_gen_free (gsl_eigen_gen_workspace * W) This function frees the memory associated with the workspace W. -- Function: void gsl_eigen_gen_params (const int COMPUTE_S, const int COMPUTE_T, const int BALANCE, gsl_eigen_gen_workspace * W) This function sets some parameters which determine how the eigenvalue problem is solved in subsequent calls to `gsl_eigen_gen'. If COMPUTE_S is set to 1, the full Schur form S will be computed by `gsl_eigen_gen'. If it is set to 0, S will not be computed (this is the default setting). S is a quasi upper triangular matrix with 1-by-1 and 2-by-2 blocks on its diagonal. 1-by-1 blocks correspond to real eigenvalues, and 2-by-2 blocks correspond to complex eigenvalues. If COMPUTE_T is set to 1, the full Schur form T will be computed by `gsl_eigen_gen'. If it is set to 0, T will not be computed (this is the default setting). T is an upper triangular matrix with non-negative elements on its diagonal. Any 2-by-2 blocks in S will correspond to a 2-by-2 diagonal block in T. The BALANCE parameter is currently ignored, since generalized balancing is not yet implemented. 
-- Function: int gsl_eigen_gen (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * ALPHA, gsl_vector * BETA, gsl_eigen_gen_workspace * W) This function computes the eigenvalues of the real generalized nonsymmetric matrix pair (A, B), and stores them as pairs in (ALPHA, BETA), where ALPHA is complex and BETA is real. If \beta_i is non-zero, then \lambda = \alpha_i / \beta_i is an eigenvalue. Likewise, if \alpha_i is non-zero, then \mu = \beta_i / \alpha_i is an eigenvalue of the alternate problem \mu A y = B y. The elements of BETA are normalized to be non-negative. If S is desired, it is stored in A on output. If T is desired, it is stored in B on output. The ordering of eigenvalues in (ALPHA, BETA) follows the ordering of the diagonal blocks in the Schur forms S and T. In rare cases, this function may fail to find all eigenvalues. If this occurs, an error code is returned. -- Function: int gsl_eigen_gen_QZ (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * ALPHA, gsl_vector * BETA, gsl_matrix * Q, gsl_matrix * Z, gsl_eigen_gen_workspace * W) This function is identical to `gsl_eigen_gen' except that it also computes the left and right Schur vectors and stores them into Q and Z respectively. -- Function: gsl_eigen_genv_workspace * gsl_eigen_genv_alloc (const size_t N) This function allocates a workspace for computing eigenvalues and eigenvectors of N-by-N real generalized nonsymmetric eigensystems. The size of the workspace is O(7n). -- Function: void gsl_eigen_genv_free (gsl_eigen_genv_workspace * W) This function frees the memory associated with the workspace W. -- Function: int gsl_eigen_genv (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * ALPHA, gsl_vector * BETA, gsl_matrix_complex * EVEC, gsl_eigen_genv_workspace * W) This function computes eigenvalues and right eigenvectors of the N-by-N real generalized nonsymmetric matrix pair (A, B). The eigenvalues are stored in (ALPHA, BETA) and the eigenvectors are stored in EVEC. It first calls `gsl_eigen_gen' to compute the eigenvalues, Schur forms, and Schur vectors. Then it finds eigenvectors of the Schur forms and backtransforms them using the Schur vectors. The Schur vectors are destroyed in the process, but can be saved by using `gsl_eigen_genv_QZ'. The computed eigenvectors are normalized to have unit magnitude. On output, (A, B) contains the generalized Schur form (S, T). If `gsl_eigen_gen' fails, no eigenvectors are computed, and an error code is returned. -- Function: int gsl_eigen_genv_QZ (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * ALPHA, gsl_vector * BETA, gsl_matrix_complex * EVEC, gsl_matrix * Q, gsl_matrix * Z, gsl_eigen_genv_workspace * W) This function is identical to `gsl_eigen_genv' except that it also computes the left and right Schur vectors and stores them into Q and Z respectively.  
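As a minimal sketch of the QZ interface, the following program computes the (\alpha, \beta) pairs for a small matrix pair and forms \lambda = \alpha / \beta explicitly; the matrices are arbitrary example values and the division is only attempted when \beta_i is non-zero:

     #include <stdio.h>
     #include <gsl/gsl_complex.h>
     #include <gsl/gsl_complex_math.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       /* Arbitrary 2-by-2 matrix pair (A, B) */
       double a_data[] = { 1.0, 2.0,
                           3.0, 4.0 };
       double b_data[] = { 5.0, 1.0,
                           1.0, 3.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_matrix_view B = gsl_matrix_view_array (b_data, 2, 2);

       gsl_vector_complex *alpha = gsl_vector_complex_alloc (2);
       gsl_vector *beta = gsl_vector_alloc (2);
       gsl_eigen_gen_workspace *w = gsl_eigen_gen_alloc (2);
       size_t i;

       gsl_eigen_gen (&A.matrix, &B.matrix, alpha, beta, w);

       for (i = 0; i < 2; i++)
         {
           gsl_complex a_i = gsl_vector_complex_get (alpha, i);
           double b_i = gsl_vector_get (beta, i);

           if (b_i != 0.0)
             {
               gsl_complex lambda = gsl_complex_div_real (a_i, b_i);
               printf ("lambda = %g + %gi\n",
                       GSL_REAL (lambda), GSL_IMAG (lambda));
             }
           else
             printf ("infinite eigenvalue (beta = 0)\n");
         }

       gsl_eigen_gen_free (w);
       gsl_vector_complex_free (alpha);
       gsl_vector_free (beta);
       return 0;
     }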
File: gsl-ref.info, Node: Sorting Eigenvalues and Eigenvectors, Next: Eigenvalue and Eigenvector Examples, Prev: Real Generalized Nonsymmetric Eigensystems, Up: Eigensystems 15.7 Sorting Eigenvalues and Eigenvectors ========================================= -- Function: int gsl_eigen_symmv_sort (gsl_vector * EVAL, gsl_matrix * EVEC, gsl_eigen_sort_t SORT_TYPE) This function simultaneously sorts the eigenvalues stored in the vector EVAL and the corresponding real eigenvectors stored in the columns of the matrix EVEC into ascending or descending order according to the value of the parameter SORT_TYPE, `GSL_EIGEN_SORT_VAL_ASC' ascending order in numerical value `GSL_EIGEN_SORT_VAL_DESC' descending order in numerical value `GSL_EIGEN_SORT_ABS_ASC' ascending order in magnitude `GSL_EIGEN_SORT_ABS_DESC' descending order in magnitude -- Function: int gsl_eigen_hermv_sort (gsl_vector * EVAL, gsl_matrix_complex * EVEC, gsl_eigen_sort_t SORT_TYPE) This function simultaneously sorts the eigenvalues stored in the vector EVAL and the corresponding complex eigenvectors stored in the columns of the matrix EVEC into ascending or descending order according to the value of the parameter SORT_TYPE as shown above. -- Function: int gsl_eigen_nonsymmv_sort (gsl_vector_complex * EVAL, gsl_matrix_complex * EVEC, gsl_eigen_sort_t SORT_TYPE) This function simultaneously sorts the eigenvalues stored in the vector EVAL and the corresponding complex eigenvectors stored in the columns of the matrix EVEC into ascending or descending order according to the value of the parameter SORT_TYPE as shown above. Only `GSL_EIGEN_SORT_ABS_ASC' and `GSL_EIGEN_SORT_ABS_DESC' are supported due to the eigenvalues being complex. -- Function: int gsl_eigen_gensymmv_sort (gsl_vector * EVAL, gsl_matrix * EVEC, gsl_eigen_sort_t SORT_TYPE) This function simultaneously sorts the eigenvalues stored in the vector EVAL and the corresponding real eigenvectors stored in the columns of the matrix EVEC into ascending or descending order according to the value of the parameter SORT_TYPE as shown above. -- Function: int gsl_eigen_genhermv_sort (gsl_vector * EVAL, gsl_matrix_complex * EVEC, gsl_eigen_sort_t SORT_TYPE) This function simultaneously sorts the eigenvalues stored in the vector EVAL and the corresponding complex eigenvectors stored in the columns of the matrix EVEC into ascending or descending order according to the value of the parameter SORT_TYPE as shown above. -- Function: int gsl_eigen_genv_sort (gsl_vector_complex * ALPHA, gsl_vector * BETA, gsl_matrix_complex * EVEC, gsl_eigen_sort_t SORT_TYPE) This function simultaneously sorts the eigenvalues stored in the vectors (ALPHA, BETA) and the corresponding complex eigenvectors stored in the columns of the matrix EVEC into ascending or descending order according to the value of the parameter SORT_TYPE as shown above. Only `GSL_EIGEN_SORT_ABS_ASC' and `GSL_EIGEN_SORT_ABS_DESC' are supported due to the eigenvalues being complex.  File: gsl-ref.info, Node: Eigenvalue and Eigenvector Examples, Next: Eigenvalue and Eigenvector References, Prev: Sorting Eigenvalues and Eigenvectors, Up: Eigensystems 15.8 Examples ============= The following program computes the eigenvalues and eigenvectors of the 4-th order Hilbert matrix, H(i,j) = 1/(i + j + 1). 
     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       double data[] = { 1.0  , 1/2.0, 1/3.0, 1/4.0,
                         1/2.0, 1/3.0, 1/4.0, 1/5.0,
                         1/3.0, 1/4.0, 1/5.0, 1/6.0,
                         1/4.0, 1/5.0, 1/6.0, 1/7.0 };

       gsl_matrix_view m = gsl_matrix_view_array (data, 4, 4);

       gsl_vector *eval = gsl_vector_alloc (4);
       gsl_matrix *evec = gsl_matrix_alloc (4, 4);

       gsl_eigen_symmv_workspace * w = gsl_eigen_symmv_alloc (4);

       gsl_eigen_symmv (&m.matrix, eval, evec, w);

       gsl_eigen_symmv_free (w);

       gsl_eigen_symmv_sort (eval, evec, GSL_EIGEN_SORT_ABS_ASC);

       {
         int i;

         for (i = 0; i < 4; i++)
           {
             double eval_i = gsl_vector_get (eval, i);
             gsl_vector_view evec_i = gsl_matrix_column (evec, i);

             printf ("eigenvalue = %g\n", eval_i);
             printf ("eigenvector = \n");
             gsl_vector_fprintf (stdout, &evec_i.vector, "%g");
           }
       }

       gsl_vector_free (eval);
       gsl_matrix_free (evec);
       return 0;
     }

Here is the beginning of the output from the program,

     $ ./a.out
     eigenvalue = 9.67023e-05
     eigenvector =
     -0.0291933
     0.328712
     -0.791411
     0.514553
     ...

This can be compared with the corresponding output from GNU OCTAVE,

     octave> [v,d] = eig(hilb(4));
     octave> diag(d)
     ans =
        9.6702e-05
        6.7383e-03
        1.6914e-01
        1.5002e+00

     octave> v
     v =
        0.029193   0.179186  -0.582076   0.792608
       -0.328712  -0.741918   0.370502   0.451923
        0.791411   0.100228   0.509579   0.322416
       -0.514553   0.638283   0.514048   0.252161

Note that the eigenvectors can differ by a change of sign, since the sign of an eigenvector is arbitrary.

The following program illustrates the use of the nonsymmetric eigensolver, by computing the eigenvalues and eigenvectors of the Vandermonde matrix V(x;i,j) = x_i^{n - j} with x = (-1,-2,3,4).

     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       double data[] = { -1.0, 1.0, -1.0, 1.0,
                         -8.0, 4.0, -2.0, 1.0,
                         27.0, 9.0, 3.0, 1.0,
                         64.0, 16.0, 4.0, 1.0 };

       gsl_matrix_view m = gsl_matrix_view_array (data, 4, 4);

       gsl_vector_complex *eval = gsl_vector_complex_alloc (4);
       gsl_matrix_complex *evec = gsl_matrix_complex_alloc (4, 4);

       gsl_eigen_nonsymmv_workspace * w = gsl_eigen_nonsymmv_alloc (4);

       gsl_eigen_nonsymmv (&m.matrix, eval, evec, w);

       gsl_eigen_nonsymmv_free (w);

       gsl_eigen_nonsymmv_sort (eval, evec, GSL_EIGEN_SORT_ABS_DESC);

       {
         int i, j;

         for (i = 0; i < 4; i++)
           {
             gsl_complex eval_i = gsl_vector_complex_get (eval, i);
             gsl_vector_complex_view evec_i = gsl_matrix_complex_column (evec, i);

             printf ("eigenvalue = %g + %gi\n",
                     GSL_REAL(eval_i), GSL_IMAG(eval_i));
             printf ("eigenvector = \n");
             for (j = 0; j < 4; ++j)
               {
                 gsl_complex z = gsl_vector_complex_get(&evec_i.vector, j);
                 printf("%g + %gi\n", GSL_REAL(z), GSL_IMAG(z));
               }
           }
       }

       gsl_vector_complex_free(eval);
       gsl_matrix_complex_free(evec);

       return 0;
     }

Here is the beginning of the output from the program,

     $ ./a.out
     eigenvalue = -6.41391 + 0i
     eigenvector =
     -0.0998822 + 0i
     -0.111251 + 0i
     0.292501 + 0i
     0.944505 + 0i
     eigenvalue = 5.54555 + 3.08545i
     eigenvector =
     -0.043487 + -0.0076308i
     0.0642377 + -0.142127i
     -0.515253 + 0.0405118i
     -0.840592 + -0.00148565i
     ...
This can be compared with the corresponding output from GNU OCTAVE,

     octave> [v,d] = eig(vander([-1 -2 3 4]));
     octave> diag(d)
     ans =
       -6.4139 + 0.0000i
        5.5456 + 3.0854i
        5.5456 - 3.0854i
        2.3228 + 0.0000i

     octave> v
     v =
     Columns 1 through 3:
       -0.09988 + 0.00000i  -0.04350 - 0.00755i  -0.04350 + 0.00755i
       -0.11125 + 0.00000i   0.06399 - 0.14224i   0.06399 + 0.14224i
        0.29250 + 0.00000i  -0.51518 + 0.04142i  -0.51518 - 0.04142i
        0.94451 + 0.00000i  -0.84059 + 0.00000i  -0.84059 - 0.00000i

     Column 4:
       -0.14493 + 0.00000i
        0.35660 + 0.00000i
        0.91937 + 0.00000i
        0.08118 + 0.00000i

Note that the eigenvectors corresponding to the eigenvalue 5.54555 + 3.08545i differ by the multiplicative constant 0.9999984 + 0.0017674i which is an arbitrary phase factor of magnitude 1.

File: gsl-ref.info, Node: Eigenvalue and Eigenvector References, Prev: Eigenvalue and Eigenvector Examples, Up: Eigensystems

15.9 References and Further Reading
===================================

Further information on the algorithms described in this section can be found in the following book,

     G. H. Golub, C. F. Van Loan, `Matrix Computations' (3rd Ed, 1996), Johns Hopkins University Press, ISBN 0-8018-5414-8.

Further information on the generalized eigensystems QZ algorithm can be found in this paper,

     C. Moler, G. Stewart, "An Algorithm for Generalized Matrix Eigenvalue Problems", SIAM J. Numer. Anal., Vol 10, No 2, 1973.

Eigensystem routines for very large matrices can be found in the Fortran library LAPACK. The LAPACK library is described in,

     `LAPACK Users' Guide' (Third Edition, 1999), Published by SIAM, ISBN 0-89871-447-8.

     `http://www.netlib.org/lapack'

The LAPACK source code can be found at the website above along with an online copy of the users guide.

File: gsl-ref.info, Node: Fast Fourier Transforms, Next: Numerical Integration, Prev: Eigensystems, Up: Top

16 Fast Fourier Transforms (FFTs)
*********************************

This chapter describes functions for performing Fast Fourier Transforms (FFTs). The library includes radix-2 routines (for lengths which are a power of two) and mixed-radix routines (which work for any length). For efficiency there are separate versions of the routines for real data and for complex data. The mixed-radix routines are a reimplementation of the FFTPACK library of Paul Swarztrauber. Fortran code for FFTPACK is available on Netlib (FFTPACK also includes some routines for sine and cosine transforms but these are currently not available in GSL). For details and derivations of the underlying algorithms consult the document `GSL FFT Algorithms' (*note FFT References and Further Reading::)

* Menu:

* Mathematical Definitions::
* Overview of complex data FFTs::
* Radix-2 FFT routines for complex data::
* Mixed-radix FFT routines for complex data::
* Overview of real data FFTs::
* Radix-2 FFT routines for real data::
* Mixed-radix FFT routines for real data::
* FFT References and Further Reading::

File: gsl-ref.info, Node: Mathematical Definitions, Next: Overview of complex data FFTs, Up: Fast Fourier Transforms

16.1 Mathematical Definitions
=============================

Fast Fourier Transforms are efficient algorithms for calculating the discrete Fourier transform (DFT),

     x_j = \sum_{k=0}^{n-1} z_k \exp(-2\pi i j k / n)

The DFT usually arises as an approximation to the continuous Fourier transform when functions are sampled at discrete intervals in space or time. The naive evaluation of the discrete Fourier transform is a matrix-vector multiplication W\vec{z}.
A general matrix-vector multiplication takes O(n^2) operations for n data-points. Fast Fourier transform algorithms use a divide-and-conquer strategy to factorize the matrix W into smaller sub-matrices, corresponding to the integer factors of the length n. If n can be factorized into a product of integers f_1 f_2 ... f_m then the DFT can be computed in O(n \sum f_i) operations. For a radix-2 FFT this gives an operation count of O(n \log_2 n). All the FFT functions offer three types of transform: forwards, inverse and backwards, based on the same mathematical definitions. The definition of the "forward Fourier transform", x = FFT(z), is, x_j = \sum_{k=0}^{n-1} z_k \exp(-2\pi i j k / n) and the definition of the "inverse Fourier transform", x = IFFT(z), is, z_j = {1 \over n} \sum_{k=0}^{n-1} x_k \exp(2\pi i j k / n). The factor of 1/n makes this a true inverse. For example, a call to `gsl_fft_complex_forward' followed by a call to `gsl_fft_complex_inverse' should return the original data (within numerical errors). In general there are two possible choices for the sign of the exponential in the transform/ inverse-transform pair. GSL follows the same convention as FFTPACK, using a negative exponential for the forward transform. The advantage of this convention is that the inverse transform recreates the original function with simple Fourier synthesis. Numerical Recipes uses the opposite convention, a positive exponential in the forward transform. The "backwards FFT" is simply our terminology for an unscaled version of the inverse FFT, z^{backwards}_j = \sum_{k=0}^{n-1} x_k \exp(2\pi i j k / n). When the overall scale of the result is unimportant it is often convenient to use the backwards FFT instead of the inverse to save unnecessary divisions.  File: gsl-ref.info, Node: Overview of complex data FFTs, Next: Radix-2 FFT routines for complex data, Prev: Mathematical Definitions, Up: Fast Fourier Transforms 16.2 Overview of complex data FFTs ================================== The inputs and outputs for the complex FFT routines are "packed arrays" of floating point numbers. In a packed array the real and imaginary parts of each complex number are placed in alternate neighboring elements. For example, the following definition of a packed array of length 6, double x[3*2]; gsl_complex_packed_array data = x; can be used to hold an array of three complex numbers, `z[3]', in the following way, data[0] = Re(z[0]) data[1] = Im(z[0]) data[2] = Re(z[1]) data[3] = Im(z[1]) data[4] = Re(z[2]) data[5] = Im(z[2]) The array indices for the data have the same ordering as those in the definition of the DFT--i.e. there are no index transformations or permutations of the data. A "stride" parameter allows the user to perform transforms on the elements `z[stride*i]' instead of `z[i]'. A stride greater than 1 can be used to take an in-place FFT of the column of a matrix. A stride of 1 accesses the array without any additional spacing between elements. To perform an FFT on a vector argument, such as `gsl_vector_complex * v', use the following definitions (or their equivalents) when calling the functions described in this chapter: gsl_complex_packed_array data = v->data; size_t stride = v->stride; size_t n = v->size; For physical applications it is important to remember that the index appearing in the DFT does not correspond directly to a physical frequency. 
If the time-step of the DFT is \Delta then the frequency-domain includes both positive and negative frequencies, ranging from -1/(2\Delta) through 0 to +1/(2\Delta). The positive frequencies are stored from the beginning of the array up to the middle, and the negative frequencies are stored backwards from the end of the array. Here is a table which shows the layout of the array DATA, and the correspondence between the time-domain data z, and the frequency-domain data x. index z x = FFT(z) 0 z(t = 0) x(f = 0) 1 z(t = 1) x(f = 1/(n Delta)) 2 z(t = 2) x(f = 2/(n Delta)) . ........ .................. n/2 z(t = n/2) x(f = +1/(2 Delta), -1/(2 Delta)) . ........ .................. n-3 z(t = n-3) x(f = -3/(n Delta)) n-2 z(t = n-2) x(f = -2/(n Delta)) n-1 z(t = n-1) x(f = -1/(n Delta)) When n is even the location n/2 contains the most positive and negative frequencies (+1/(2 \Delta), -1/(2 \Delta)) which are equivalent. If n is odd then general structure of the table above still applies, but n/2 does not appear.  File: gsl-ref.info, Node: Radix-2 FFT routines for complex data, Next: Mixed-radix FFT routines for complex data, Prev: Overview of complex data FFTs, Up: Fast Fourier Transforms 16.3 Radix-2 FFT routines for complex data ========================================== The radix-2 algorithms described in this section are simple and compact, although not necessarily the most efficient. They use the Cooley-Tukey algorithm to compute in-place complex FFTs for lengths which are a power of 2--no additional storage is required. The corresponding self-sorting mixed-radix routines offer better performance at the expense of requiring additional working space. All the functions described in this section are declared in the header file `gsl_fft_complex.h'. -- Function: int gsl_fft_complex_radix2_forward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N) -- Function: int gsl_fft_complex_radix2_transform (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, gsl_fft_direction SIGN) -- Function: int gsl_fft_complex_radix2_backward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N) -- Function: int gsl_fft_complex_radix2_inverse (gsl_complex_packed_array DATA, size_t STRIDE, size_t N) These functions compute forward, backward and inverse FFTs of length N with stride STRIDE, on the packed complex array DATA using an in-place radix-2 decimation-in-time algorithm. The length of the transform N is restricted to powers of two. For the `transform' version of the function the SIGN argument can be either `forward' (-1) or `backward' (+1). The functions return a value of `GSL_SUCCESS' if no errors were detected, or `GSL_EDOM' if the length of the data N is not a power of two. -- Function: int gsl_fft_complex_radix2_dif_forward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N) -- Function: int gsl_fft_complex_radix2_dif_transform (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, gsl_fft_direction SIGN) -- Function: int gsl_fft_complex_radix2_dif_backward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N) -- Function: int gsl_fft_complex_radix2_dif_inverse (gsl_complex_packed_array DATA, size_t STRIDE, size_t N) These are decimation-in-frequency versions of the radix-2 FFT functions. Here is an example program which computes the FFT of a short pulse in a sample of length 128. To make the resulting Fourier transform real the pulse is defined for equal positive and negative times (-10 ... 10), where the negative times wrap around the end of the array. 
     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_complex.h>

     #define REAL(z,i) ((z)[2*(i)])
     #define IMAG(z,i) ((z)[2*(i)+1])

     int
     main (void)
     {
       int i;
       double data[2*128];

       for (i = 0; i < 128; i++)
         {
            REAL(data,i) = 0.0;
            IMAG(data,i) = 0.0;
         }

       REAL(data,0) = 1.0;

       for (i = 1; i <= 10; i++)
         {
            REAL(data,i) = REAL(data,128-i) = 1.0;
         }

       for (i = 0; i < 128; i++)
         {
           printf ("%d %e %e\n", i,
                   REAL(data,i), IMAG(data,i));
         }
       printf ("\n");

       gsl_fft_complex_radix2_forward (data, 1, 128);

       for (i = 0; i < 128; i++)
         {
           printf ("%d %e %e\n", i,
                   REAL(data,i)/sqrt(128),
                   IMAG(data,i)/sqrt(128));
         }

       return 0;
     }

Note that we have assumed that the program is using the default error handler (which calls `abort' for any errors). If you are not using a safe error handler you would need to check the return status of `gsl_fft_complex_radix2_forward'.

The transformed data is rescaled by 1/\sqrt n so that it fits on the same plot as the input. Only the real part is shown; by the choice of the input data the imaginary part is zero. Allowing for the wrap-around of negative times at t=128, and working in units of k/n, the DFT approximates the continuum Fourier transform, giving a modulated sine function.

 File: gsl-ref.info, Node: Mixed-radix FFT routines for complex data, Next: Overview of real data FFTs, Prev: Radix-2 FFT routines for complex data, Up: Fast Fourier Transforms

16.4 Mixed-radix FFT routines for complex data
==============================================

This section describes mixed-radix FFT algorithms for complex data. The mixed-radix functions work for FFTs of any length. They are a reimplementation of Paul Swarztrauber's Fortran FFTPACK library. The theory is explained in the review article `Self-sorting Mixed-radix FFTs' by Clive Temperton. The routines here use the same indexing scheme and basic algorithms as FFTPACK.

The mixed-radix algorithm is based on sub-transform modules--highly optimized small length FFTs which are combined to create larger FFTs. There are efficient modules for factors of 2, 3, 4, 5, 6 and 7. The modules for the composite factors of 4 and 6 are faster than combining the modules for 2*2 and 2*3.

For factors which are not implemented as modules there is a fall-back to a general length-n module which uses Singleton's method for efficiently computing a DFT. This module is O(n^2), and slower than a dedicated module would be but works for any length n. Of course, lengths which use the general length-n module will still be factorized as much as possible. For example, a length of 143 will be factorized into 11*13. Large prime factors are the worst case scenario, e.g. as found in n=2*3*99991, and should be avoided because their O(n^2) scaling will dominate the run-time (consult the document `GSL FFT Algorithms' included in the GSL distribution if you encounter this problem).

The mixed-radix initialization function `gsl_fft_complex_wavetable_alloc' returns the list of factors chosen by the library for a given length n. It can be used to check how well the length has been factorized, and estimate the run-time. To a first approximation the run-time scales as n \sum f_i, where the f_i are the factors of n. For programs under user control you may wish to issue a warning that the transform will be slow when the length is poorly factorized. If you frequently encounter data lengths which cannot be factorized using the existing small-prime modules consult `GSL FFT Algorithms' for details on adding support for other factors.

All the functions described in this section are declared in the header file `gsl_fft_complex.h'.
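As a small sketch of the factorization check just described (illustrative only, not part of GSL; it uses the `nf' and `factor' fields of the `gsl_fft_complex_wavetable' struct documented below),

     #include <stdio.h>
     #include <gsl/gsl_fft_complex.h>

     /* Sketch: print the factors chosen for a length-n complex FFT
        and return the sum \sum f_i, which to a first approximation
        governs the run-time n \sum f_i.  Returns 0 if the wavetable
        cannot be allocated. */
     static size_t
     report_fft_factors (size_t n)
     {
       gsl_fft_complex_wavetable * wt
         = gsl_fft_complex_wavetable_alloc (n);
       size_t i, sum = 0;

       if (wt == 0)
         return 0;

       for (i = 0; i < wt->nf; i++)
         {
           printf ("factor %d: %d\n", (int) i, (int) wt->factor[i]);
           sum += wt->factor[i];
         }

       gsl_fft_complex_wavetable_free (wt);
       return sum;
     }

A poorly factorized length shows up here as a large factor in the list, and a correspondingly large sum of factors.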
-- Function: gsl_fft_complex_wavetable * gsl_fft_complex_wavetable_alloc (size_t N) This function prepares a trigonometric lookup table for a complex FFT of length N. The function returns a pointer to the newly allocated `gsl_fft_complex_wavetable' if no errors were detected, and a null pointer in the case of error. The length N is factorized into a product of subtransforms, and the factors and their trigonometric coefficients are stored in the wavetable. The trigonometric coefficients are computed using direct calls to `sin' and `cos', for accuracy. Recursion relations could be used to compute the lookup table faster, but if an application performs many FFTs of the same length then this computation is a one-off overhead which does not affect the final throughput. The wavetable structure can be used repeatedly for any transform of the same length. The table is not modified by calls to any of the other FFT functions. The same wavetable can be used for both forward and backward (or inverse) transforms of a given length. -- Function: void gsl_fft_complex_wavetable_free (gsl_fft_complex_wavetable * WAVETABLE) This function frees the memory associated with the wavetable WAVETABLE. The wavetable can be freed if no further FFTs of the same length will be needed. These functions operate on a `gsl_fft_complex_wavetable' structure which contains internal parameters for the FFT. It is not necessary to set any of the components directly but it can sometimes be useful to examine them. For example, the chosen factorization of the FFT length is given and can be used to provide an estimate of the run-time or numerical error. The wavetable structure is declared in the header file `gsl_fft_complex.h'. -- Data Type: gsl_fft_complex_wavetable This is a structure that holds the factorization and trigonometric lookup tables for the mixed radix fft algorithm. It has the following components: `size_t n' This is the number of complex data points `size_t nf' This is the number of factors that the length `n' was decomposed into. `size_t factor[64]' This is the array of factors. Only the first `nf' elements are used. `gsl_complex * trig' This is a pointer to a preallocated trigonometric lookup table of `n' complex elements. `gsl_complex * twiddle[64]' This is an array of pointers into `trig', giving the twiddle factors for each pass. The mixed radix algorithms require additional working space to hold the intermediate steps of the transform. -- Function: gsl_fft_complex_workspace * gsl_fft_complex_workspace_alloc (size_t N) This function allocates a workspace for a complex transform of length N. -- Function: void gsl_fft_complex_workspace_free (gsl_fft_complex_workspace * WORKSPACE) This function frees the memory associated with the workspace WORKSPACE. The workspace can be freed if no further FFTs of the same length will be needed. 
The following functions compute the transform,

 -- Function: int gsl_fft_complex_forward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK)
 -- Function: int gsl_fft_complex_transform (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK, gsl_fft_direction SIGN)
 -- Function: int gsl_fft_complex_backward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK)
 -- Function: int gsl_fft_complex_inverse (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK)
     These functions compute forward, backward and inverse FFTs of length N with stride STRIDE, on the packed complex array DATA, using a mixed radix decimation-in-frequency algorithm. There is no restriction on the length N. Efficient modules are provided for subtransforms of length 2, 3, 4, 5, 6 and 7. Any remaining factors are computed with a slow, O(n^2), general-n module. The caller must supply a WAVETABLE containing the trigonometric lookup tables and a workspace WORK. For the `transform' version of the function the SIGN argument can be either `forward' (-1) or `backward' (+1).

     The functions return a value of `0' if no errors were detected. The following `gsl_errno' conditions are defined for these functions:

     `GSL_EDOM'
          The length of the data N is not a positive integer (i.e. N is zero).

     `GSL_EINVAL'
          The length of the data N and the length used to compute the given WAVETABLE do not match.

Here is an example program which computes the FFT of a short pulse in a sample of length 630 (=2*3*3*5*7) using the mixed-radix algorithm.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_complex.h>

     #define REAL(z,i) ((z)[2*(i)])
     #define IMAG(z,i) ((z)[2*(i)+1])

     int
     main (void)
     {
       int i;
       const int n = 630;
       double data[2*n];

       gsl_fft_complex_wavetable * wavetable;
       gsl_fft_complex_workspace * workspace;

       for (i = 0; i < n; i++)
         {
           REAL(data,i) = 0.0;
           IMAG(data,i) = 0.0;
         }

       data[0] = 1.0;

       for (i = 1; i <= 10; i++)
         {
           REAL(data,i) = REAL(data,n-i) = 1.0;
         }

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e %e\n", i, REAL(data,i),
                                     IMAG(data,i));
         }
       printf ("\n");

       wavetable = gsl_fft_complex_wavetable_alloc (n);
       workspace = gsl_fft_complex_workspace_alloc (n);

       for (i = 0; i < wavetable->nf; i++)
         {
            printf ("# factor %d: %d\n", i,
                    (int) wavetable->factor[i]);
         }

       gsl_fft_complex_forward (data, 1, n,
                                wavetable, workspace);

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e %e\n", i, REAL(data,i),
                                     IMAG(data,i));
         }

       gsl_fft_complex_wavetable_free (wavetable);
       gsl_fft_complex_workspace_free (workspace);
       return 0;
     }

Note that we have assumed that the program is using the default `gsl' error handler (which calls `abort' for any errors). If you are not using a safe error handler you would need to check the return status of all the `gsl' routines.

 File: gsl-ref.info, Node: Overview of real data FFTs, Next: Radix-2 FFT routines for real data, Prev: Mixed-radix FFT routines for complex data, Up: Fast Fourier Transforms

16.5 Overview of real data FFTs
===============================

The functions for real data are similar to those for complex data. However, there is an important difference between forward and inverse transforms. The Fourier transform of a real sequence is not real.
It is a complex sequence with a special symmetry: z_k = z_{n-k}^* A sequence with this symmetry is called "conjugate-complex" or "half-complex". This different structure requires different storage layouts for the forward transform (from real to half-complex) and inverse transform (from half-complex back to real). As a consequence the routines are divided into two sets: functions in `gsl_fft_real' which operate on real sequences and functions in `gsl_fft_halfcomplex' which operate on half-complex sequences. Functions in `gsl_fft_real' compute the frequency coefficients of a real sequence. The half-complex coefficients c of a real sequence x are given by Fourier analysis, c_k = \sum_{j=0}^{n-1} x_j \exp(-2 \pi i j k /n) Functions in `gsl_fft_halfcomplex' compute inverse or backwards transforms. They reconstruct real sequences by Fourier synthesis from their half-complex frequency coefficients, c, x_j = {1 \over n} \sum_{k=0}^{n-1} c_k \exp(2 \pi i j k /n) The symmetry of the half-complex sequence implies that only half of the complex numbers in the output need to be stored. The remaining half can be reconstructed using the half-complex symmetry condition. This works for all lengths, even and odd--when the length is even the middle value where k=n/2 is also real. Thus only N real numbers are required to store the half-complex sequence, and the transform of a real sequence can be stored in the same size array as the original data. The precise storage arrangements depend on the algorithm, and are different for radix-2 and mixed-radix routines. The radix-2 function operates in-place, which constrains the locations where each element can be stored. The restriction forces real and imaginary parts to be stored far apart. The mixed-radix algorithm does not have this restriction, and it stores the real and imaginary parts of a given term in neighboring locations (which is desirable for better locality of memory accesses).  File: gsl-ref.info, Node: Radix-2 FFT routines for real data, Next: Mixed-radix FFT routines for real data, Prev: Overview of real data FFTs, Up: Fast Fourier Transforms 16.6 Radix-2 FFT routines for real data ======================================= This section describes radix-2 FFT algorithms for real data. They use the Cooley-Tukey algorithm to compute in-place FFTs for lengths which are a power of 2. The radix-2 FFT functions for real data are declared in the header files `gsl_fft_real.h' -- Function: int gsl_fft_real_radix2_transform (double DATA[], size_t STRIDE, size_t N) This function computes an in-place radix-2 FFT of length N and stride STRIDE on the real array DATA. The output is a half-complex sequence, which is stored in-place. The arrangement of the half-complex terms uses the following scheme: for k < n/2 the real part of the k-th term is stored in location k, and the corresponding imaginary part is stored in location n-k. Terms with k > n/2 can be reconstructed using the symmetry z_k = z^*_{n-k}. The terms for k=0 and k=n/2 are both purely real, and count as a special case. Their real parts are stored in locations 0 and n/2 respectively, while their imaginary parts which are zero are not stored. The following table shows the correspondence between the output DATA and the equivalent results obtained by considering the input data as a complex sequence with zero imaginary part (assuming STRIDE=1), complex[0].real = data[0] complex[0].imag = 0 complex[1].real = data[1] complex[1].imag = data[n-1] ............... ................ 
complex[k].real = data[k] complex[k].imag = data[n-k] ............... ................ complex[n/2].real = data[n/2] complex[n/2].imag = 0 ............... ................ complex[k'].real = data[k] k' = n - k complex[k'].imag = -data[n-k] ............... ................ complex[n-1].real = data[1] complex[n-1].imag = -data[n-1] Note that the output data can be converted into the full complex sequence using the function `gsl_fft_halfcomplex_radix2_unpack' described below. The radix-2 FFT functions for halfcomplex data are declared in the header file `gsl_fft_halfcomplex.h'. -- Function: int gsl_fft_halfcomplex_radix2_inverse (double DATA[], size_t STRIDE, size_t N) -- Function: int gsl_fft_halfcomplex_radix2_backward (double DATA[], size_t STRIDE, size_t N) These functions compute the inverse or backwards in-place radix-2 FFT of length N and stride STRIDE on the half-complex sequence DATA stored according the output scheme used by `gsl_fft_real_radix2'. The result is a real array stored in natural order. -- Function: int gsl_fft_halfcomplex_radix2_unpack (const double HALFCOMPLEX_COEFFICIENT[], gsl_complex_packed_array COMPLEX_COEFFICIENT, size_t STRIDE, size_t N) This function converts HALFCOMPLEX_COEFFICIENT, an array of half-complex coefficients as returned by `gsl_fft_real_radix2_transform', into an ordinary complex array, COMPLEX_COEFFICIENT. It fills in the complex array using the symmetry z_k = z_{n-k}^* to reconstruct the redundant elements. The algorithm for the conversion is, complex_coefficient[0].real = halfcomplex_coefficient[0]; complex_coefficient[0].imag = 0.0; for (i = 1; i < n - i; i++) { double hc_real = halfcomplex_coefficient[i*stride]; double hc_imag = halfcomplex_coefficient[(n-i)*stride]; complex_coefficient[i*stride].real = hc_real; complex_coefficient[i*stride].imag = hc_imag; complex_coefficient[(n - i)*stride].real = hc_real; complex_coefficient[(n - i)*stride].imag = -hc_imag; } if (i == n - i) { complex_coefficient[i*stride].real = halfcomplex_coefficient[(n - 1)*stride]; complex_coefficient[i*stride].imag = 0.0; }  File: gsl-ref.info, Node: Mixed-radix FFT routines for real data, Next: FFT References and Further Reading, Prev: Radix-2 FFT routines for real data, Up: Fast Fourier Transforms 16.7 Mixed-radix FFT routines for real data =========================================== This section describes mixed-radix FFT algorithms for real data. The mixed-radix functions work for FFTs of any length. They are a reimplementation of the real-FFT routines in the Fortran FFTPACK library by Paul Swarztrauber. The theory behind the algorithm is explained in the article `Fast Mixed-Radix Real Fourier Transforms' by Clive Temperton. The routines here use the same indexing scheme and basic algorithms as FFTPACK. The functions use the FFTPACK storage convention for half-complex sequences. In this convention the half-complex transform of a real sequence is stored with frequencies in increasing order, starting at zero, with the real and imaginary parts of each frequency in neighboring locations. When a value is known to be real the imaginary part is not stored. The imaginary part of the zero-frequency component is never stored. It is known to be zero (since the zero frequency component is simply the sum of the input data (all real)). For a sequence of even length the imaginary part of the frequency n/2 is not stored either, since the symmetry z_k = z_{n-k}^* implies that this is purely real too. The storage scheme is best shown by some examples. 
The table below shows the output for an odd-length sequence, n=5. The two columns give the correspondence between the 5 values in the half-complex sequence returned by `gsl_fft_real_transform', HALFCOMPLEX[] and the values COMPLEX[] that would be returned if the same real input sequence were passed to `gsl_fft_complex_backward' as a complex sequence (with imaginary parts set to `0'), complex[0].real = halfcomplex[0] complex[0].imag = 0 complex[1].real = halfcomplex[1] complex[1].imag = halfcomplex[2] complex[2].real = halfcomplex[3] complex[2].imag = halfcomplex[4] complex[3].real = halfcomplex[3] complex[3].imag = -halfcomplex[4] complex[4].real = halfcomplex[1] complex[4].imag = -halfcomplex[2] The upper elements of the COMPLEX array, `complex[3]' and `complex[4]' are filled in using the symmetry condition. The imaginary part of the zero-frequency term `complex[0].imag' is known to be zero by the symmetry. The next table shows the output for an even-length sequence, n=6 In the even case there are two values which are purely real, complex[0].real = halfcomplex[0] complex[0].imag = 0 complex[1].real = halfcomplex[1] complex[1].imag = halfcomplex[2] complex[2].real = halfcomplex[3] complex[2].imag = halfcomplex[4] complex[3].real = halfcomplex[5] complex[3].imag = 0 complex[4].real = halfcomplex[3] complex[4].imag = -halfcomplex[4] complex[5].real = halfcomplex[1] complex[5].imag = -halfcomplex[2] The upper elements of the COMPLEX array, `complex[4]' and `complex[5]' are filled in using the symmetry condition. Both `complex[0].imag' and `complex[3].imag' are known to be zero. All these functions are declared in the header files `gsl_fft_real.h' and `gsl_fft_halfcomplex.h'. -- Function: gsl_fft_real_wavetable * gsl_fft_real_wavetable_alloc (size_t N) -- Function: gsl_fft_halfcomplex_wavetable * gsl_fft_halfcomplex_wavetable_alloc (size_t N) These functions prepare trigonometric lookup tables for an FFT of size n real elements. The functions return a pointer to the newly allocated struct if no errors were detected, and a null pointer in the case of error. The length N is factorized into a product of subtransforms, and the factors and their trigonometric coefficients are stored in the wavetable. The trigonometric coefficients are computed using direct calls to `sin' and `cos', for accuracy. Recursion relations could be used to compute the lookup table faster, but if an application performs many FFTs of the same length then computing the wavetable is a one-off overhead which does not affect the final throughput. The wavetable structure can be used repeatedly for any transform of the same length. The table is not modified by calls to any of the other FFT functions. The appropriate type of wavetable must be used for forward real or inverse half-complex transforms. -- Function: void gsl_fft_real_wavetable_free (gsl_fft_real_wavetable * WAVETABLE) -- Function: void gsl_fft_halfcomplex_wavetable_free (gsl_fft_halfcomplex_wavetable * WAVETABLE) These functions free the memory associated with the wavetable WAVETABLE. The wavetable can be freed if no further FFTs of the same length will be needed. The mixed radix algorithms require additional working space to hold the intermediate steps of the transform, -- Function: gsl_fft_real_workspace * gsl_fft_real_workspace_alloc (size_t N) This function allocates a workspace for a real transform of length N. The same workspace can be used for both forward real and inverse halfcomplex transforms. 
-- Function: void gsl_fft_real_workspace_free (gsl_fft_real_workspace * WORKSPACE) This function frees the memory associated with the workspace WORKSPACE. The workspace can be freed if no further FFTs of the same length will be needed. The following functions compute the transforms of real and half-complex data, -- Function: int gsl_fft_real_transform (double DATA[], size_t STRIDE, size_t N, const gsl_fft_real_wavetable * WAVETABLE, gsl_fft_real_workspace * WORK) -- Function: int gsl_fft_halfcomplex_transform (double DATA[], size_t STRIDE, size_t N, const gsl_fft_halfcomplex_wavetable * WAVETABLE, gsl_fft_real_workspace * WORK) These functions compute the FFT of DATA, a real or half-complex array of length N, using a mixed radix decimation-in-frequency algorithm. For `gsl_fft_real_transform' DATA is an array of time-ordered real data. For `gsl_fft_halfcomplex_transform' DATA contains Fourier coefficients in the half-complex ordering described above. There is no restriction on the length N. Efficient modules are provided for subtransforms of length 2, 3, 4 and 5. Any remaining factors are computed with a slow, O(n^2), general-n module. The caller must supply a WAVETABLE containing trigonometric lookup tables and a workspace WORK. -- Function: int gsl_fft_real_unpack (const double REAL_COEFFICIENT[], gsl_complex_packed_array COMPLEX_COEFFICIENT, size_t STRIDE, size_t N) This function converts a single real array, REAL_COEFFICIENT into an equivalent complex array, COMPLEX_COEFFICIENT, (with imaginary part set to zero), suitable for `gsl_fft_complex' routines. The algorithm for the conversion is simply, for (i = 0; i < n; i++) { complex_coefficient[i*stride].real = real_coefficient[i*stride]; complex_coefficient[i*stride].imag = 0.0; } -- Function: int gsl_fft_halfcomplex_unpack (const double HALFCOMPLEX_COEFFICIENT[], gsl_complex_packed_array COMPLEX_COEFFICIENT, size_t STRIDE, size_t N) This function converts HALFCOMPLEX_COEFFICIENT, an array of half-complex coefficients as returned by `gsl_fft_real_transform', into an ordinary complex array, COMPLEX_COEFFICIENT. It fills in the complex array using the symmetry z_k = z_{n-k}^* to reconstruct the redundant elements. The algorithm for the conversion is, complex_coefficient[0].real = halfcomplex_coefficient[0]; complex_coefficient[0].imag = 0.0; for (i = 1; i < n - i; i++) { double hc_real = halfcomplex_coefficient[(2 * i - 1)*stride]; double hc_imag = halfcomplex_coefficient[(2 * i)*stride]; complex_coefficient[i*stride].real = hc_real; complex_coefficient[i*stride].imag = hc_imag; complex_coefficient[(n - i)*stride].real = hc_real; complex_coefficient[(n - i)*stride].imag = -hc_imag; } if (i == n - i) { complex_coefficient[i*stride].real = halfcomplex_coefficient[(n - 1)*stride]; complex_coefficient[i*stride].imag = 0.0; } Here is an example program using `gsl_fft_real_transform' and `gsl_fft_halfcomplex_inverse'. It generates a real signal in the shape of a square pulse. The pulse is Fourier transformed to frequency space, and all but the lowest ten frequency components are removed from the array of Fourier coefficients returned by `gsl_fft_real_transform'. The remaining Fourier coefficients are transformed back to the time-domain, to give a filtered version of the square pulse. Since Fourier coefficients are stored using the half-complex symmetry both positive and negative frequencies are removed and the final filtered signal is also real. 
     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_real.h>
     #include <gsl/gsl_fft_halfcomplex.h>

     int
     main (void)
     {
       int i, n = 100;
       double data[n];

       gsl_fft_real_wavetable * real;
       gsl_fft_halfcomplex_wavetable * hc;
       gsl_fft_real_workspace * work;

       for (i = 0; i < n; i++)
         {
           data[i] = 0.0;
         }

       for (i = n / 3; i < 2 * n / 3; i++)
         {
           data[i] = 1.0;
         }

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e\n", i, data[i]);
         }
       printf ("\n");

       work = gsl_fft_real_workspace_alloc (n);
       real = gsl_fft_real_wavetable_alloc (n);

       gsl_fft_real_transform (data, 1, n,
                               real, work);

       gsl_fft_real_wavetable_free (real);

       for (i = 11; i < n; i++)
         {
           data[i] = 0;
         }

       hc = gsl_fft_halfcomplex_wavetable_alloc (n);

       gsl_fft_halfcomplex_inverse (data, 1, n,
                                    hc, work);
       gsl_fft_halfcomplex_wavetable_free (hc);

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e\n", i, data[i]);
         }

       gsl_fft_real_workspace_free (work);
       return 0;
     }

 File: gsl-ref.info, Node: FFT References and Further Reading, Prev: Mixed-radix FFT routines for real data, Up: Fast Fourier Transforms

16.8 References and Further Reading
===================================

A good starting point for learning more about the FFT is the review article `Fast Fourier Transforms: A Tutorial Review and A State of the Art' by Duhamel and Vetterli,

     P. Duhamel and M. Vetterli. Fast Fourier transforms: A tutorial review and a state of the art. `Signal Processing', 19:259-299, 1990.

To find out about the algorithms used in the GSL routines you may want to consult the document `GSL FFT Algorithms' (it is included in GSL, as `doc/fftalgorithms.tex'). This has general information on FFTs and explicit derivations of the implementation for each routine. There are also references to the relevant literature. For convenience some of the more important references are reproduced below.

There are several introductory books on the FFT with example programs, such as `The Fast Fourier Transform' by Brigham and `DFT/FFT and Convolution Algorithms' by Burrus and Parks,

     E. Oran Brigham. `The Fast Fourier Transform'. Prentice Hall, 1974.

     C. S. Burrus and T. W. Parks. `DFT/FFT and Convolution Algorithms'. Wiley, 1984.

Both these introductory books cover the radix-2 FFT in some detail. The mixed-radix algorithm at the heart of the FFTPACK routines is reviewed in Clive Temperton's paper,

     Clive Temperton. Self-sorting mixed-radix fast Fourier transforms. `Journal of Computational Physics', 52(1):1-23, 1983.

The derivation of FFTs for real-valued data is explained in the following two articles,

     Henrik V. Sorenson, Douglas L. Jones, Michael T. Heideman, and C. Sidney Burrus. Real-valued fast Fourier transform algorithms. `IEEE Transactions on Acoustics, Speech, and Signal Processing', ASSP-35(6):849-863, 1987.

     Clive Temperton. Fast mixed-radix real Fourier transforms. `Journal of Computational Physics', 52:340-350, 1983.

In 1979 the IEEE published a compendium of carefully-reviewed Fortran FFT programs in `Programs for Digital Signal Processing'. It is a useful reference for implementations of many different FFT algorithms,

     Digital Signal Processing Committee and IEEE Acoustics, Speech, and Signal Processing Committee, editors. `Programs for Digital Signal Processing'. IEEE Press, 1979.

For large-scale FFT work we recommend the use of the dedicated FFTW library by Frigo and Johnson. The FFTW library is self-optimizing--it automatically tunes itself for each hardware platform in order to achieve maximum performance. It is available under the GNU GPL.
FFTW Website, `http://www.fftw.org/' The source code for FFTPACK is available from Netlib, FFTPACK, `http://www.netlib.org/fftpack/'  File: gsl-ref.info, Node: Numerical Integration, Next: Random Number Generation, Prev: Fast Fourier Transforms, Up: Top 17 Numerical Integration ************************ This chapter describes routines for performing numerical integration (quadrature) of a function in one dimension. There are routines for adaptive and non-adaptive integration of general functions, with specialised routines for specific cases. These include integration over infinite and semi-infinite ranges, singular integrals, including logarithmic singularities, computation of Cauchy principal values and oscillatory integrals. The library reimplements the algorithms used in QUADPACK, a numerical integration package written by Piessens, de Doncker-Kapenga, Ueberhuber and Kahaner. Fortran code for QUADPACK is available on Netlib. Also included are non-adaptive, fixed-order Gauss-Legendre integration routines with high precision coefficients by Pavel Holoborodko. The functions described in this chapter are declared in the header file `gsl_integration.h'. * Menu: * Numerical Integration Introduction:: * QNG non-adaptive Gauss-Kronrod integration:: * QAG adaptive integration:: * QAGS adaptive integration with singularities:: * QAGP adaptive integration with known singular points:: * QAGI adaptive integration on infinite intervals:: * QAWC adaptive integration for Cauchy principal values:: * QAWS adaptive integration for singular functions:: * QAWO adaptive integration for oscillatory functions:: * QAWF adaptive integration for Fourier integrals:: * CQUAD doubly-adaptive integration:: * Fixed order Gauss-Legendre integration:: * Numerical integration error codes:: * Numerical integration examples:: * Numerical integration References and Further Reading::  File: gsl-ref.info, Node: Numerical Integration Introduction, Next: QNG non-adaptive Gauss-Kronrod integration, Up: Numerical Integration 17.1 Introduction ================= Each algorithm computes an approximation to a definite integral of the form, I = \int_a^b f(x) w(x) dx where w(x) is a weight function (for general integrands w(x)=1). The user provides absolute and relative error bounds (epsabs, epsrel) which specify the following accuracy requirement, |RESULT - I| <= max(epsabs, epsrel |I|) where RESULT is the numerical approximation obtained by the algorithm. The algorithms attempt to estimate the absolute error ABSERR = |RESULT - I| in such a way that the following inequality holds, |RESULT - I| <= ABSERR <= max(epsabs, epsrel |I|) In short, the routines return the first approximation which has an absolute error smaller than epsabs or a relative error smaller than epsrel. Note that this is an either-or constraint, not simultaneous. To compute to a specified absolute error, set epsrel to zero. To compute to a specified relative error, set epsabs to zero. The routines will fail to converge if the error bounds are too stringent, but always return the best approximation obtained up to that stage. 
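As a minimal sketch of this either-or convention (illustrative only; it uses the QAGS routine and workspace functions documented later in this chapter, and the helper name `integrate_to_rel_error' is ours),

     #include <gsl/gsl_integration.h>

     /* Sketch: integrate f over [a,b] to a purely relative accuracy
        by passing epsabs = 0, so that only the relative bound is
        active.  For a purely absolute bound, swap the roles: pass
        the desired bound as epsabs and set epsrel = 0. */
     static double
     integrate_to_rel_error (gsl_function * f, double a, double b,
                             double epsrel)
     {
       gsl_integration_workspace * w
         = gsl_integration_workspace_alloc (1000);
       double result, abserr;

       gsl_integration_qags (f, a, b, 0.0, epsrel, 1000,
                             w, &result, &abserr);

       gsl_integration_workspace_free (w);
       return result;
     }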
The algorithms in QUADPACK use a naming convention based on the following letters, `Q' - quadrature routine `N' - non-adaptive integrator `A' - adaptive integrator `G' - general integrand (user-defined) `W' - weight function with integrand `S' - singularities can be more readily integrated `P' - points of special difficulty can be supplied `I' - infinite range of integration `O' - oscillatory weight function, cos or sin `F' - Fourier integral `C' - Cauchy principal value The algorithms are built on pairs of quadrature rules, a higher order rule and a lower order rule. The higher order rule is used to compute the best approximation to an integral over a small range. The difference between the results of the higher order rule and the lower order rule gives an estimate of the error in the approximation. * Menu: * Integrands without weight functions:: * Integrands with weight functions:: * Integrands with singular weight functions::  File: gsl-ref.info, Node: Integrands without weight functions, Next: Integrands with weight functions, Up: Numerical Integration Introduction 17.1.1 Integrands without weight functions ------------------------------------------ The algorithms for general functions (without a weight function) are based on Gauss-Kronrod rules. A Gauss-Kronrod rule begins with a classical Gaussian quadrature rule of order m. This is extended with additional points between each of the abscissae to give a higher order Kronrod rule of order 2m+1. The Kronrod rule is efficient because it reuses existing function evaluations from the Gaussian rule. The higher order Kronrod rule is used as the best approximation to the integral, and the difference between the two rules is used as an estimate of the error in the approximation.  File: gsl-ref.info, Node: Integrands with weight functions, Next: Integrands with singular weight functions, Prev: Integrands without weight functions, Up: Numerical Integration Introduction 17.1.2 Integrands with weight functions --------------------------------------- For integrands with weight functions the algorithms use Clenshaw-Curtis quadrature rules. A Clenshaw-Curtis rule begins with an n-th order Chebyshev polynomial approximation to the integrand. This polynomial can be integrated exactly to give an approximation to the integral of the original function. The Chebyshev expansion can be extended to higher orders to improve the approximation and provide an estimate of the error.  File: gsl-ref.info, Node: Integrands with singular weight functions, Prev: Integrands with weight functions, Up: Numerical Integration Introduction 17.1.3 Integrands with singular weight functions ------------------------------------------------ The presence of singularities (or other behavior) in the integrand can cause slow convergence in the Chebyshev approximation. The modified Clenshaw-Curtis rules used in QUADPACK separate out several common weight functions which cause slow convergence. These weight functions are integrated analytically against the Chebyshev polynomials to precompute "modified Chebyshev moments". Combining the moments with the Chebyshev approximation to the function gives the desired integral. The use of analytic integration for the singular part of the function allows exact cancellations and substantially improves the overall convergence behavior of the integration.  
File: gsl-ref.info, Node: QNG non-adaptive Gauss-Kronrod integration, Next: QAG adaptive integration, Prev: Numerical Integration Introduction, Up: Numerical Integration 17.2 QNG non-adaptive Gauss-Kronrod integration =============================================== The QNG algorithm is a non-adaptive procedure which uses fixed Gauss-Kronrod-Patterson abscissae to sample the integrand at a maximum of 87 points. It is provided for fast integration of smooth functions. -- Function: int gsl_integration_qng (const gsl_function * F, double A, double B, double EPSABS, double EPSREL, double * RESULT, double * ABSERR, size_t * NEVAL) This function applies the Gauss-Kronrod 10-point, 21-point, 43-point and 87-point integration rules in succession until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, EPSABS and EPSREL. The function returns the final approximation, RESULT, an estimate of the absolute error, ABSERR and the number of function evaluations used, NEVAL. The Gauss-Kronrod rules are designed in such a way that each rule uses all the results of its predecessors, in order to minimize the total number of function evaluations.  File: gsl-ref.info, Node: QAG adaptive integration, Next: QAGS adaptive integration with singularities, Prev: QNG non-adaptive Gauss-Kronrod integration, Up: Numerical Integration 17.3 QAG adaptive integration ============================= The QAG algorithm is a simple adaptive integration procedure. The integration region is divided into subintervals, and on each iteration the subinterval with the largest estimated error is bisected. This reduces the overall error rapidly, as the subintervals become concentrated around local difficulties in the integrand. These subintervals are managed by a `gsl_integration_workspace' struct, which handles the memory for the subinterval ranges, results and error estimates. -- Function: gsl_integration_workspace * gsl_integration_workspace_alloc (size_t N) This function allocates a workspace sufficient to hold N double precision intervals, their integration results and error estimates. -- Function: void gsl_integration_workspace_free (gsl_integration_workspace * W) This function frees the memory associated with the workspace W. -- Function: int gsl_integration_qag (const gsl_function * F, double A, double B, double EPSABS, double EPSREL, size_t LIMIT, int KEY, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function applies an integration rule adaptively until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, EPSABS and EPSREL. The function returns the final approximation, RESULT, and an estimate of the absolute error, ABSERR. The integration rule is determined by the value of KEY, which should be chosen from the following symbolic names, GSL_INTEG_GAUSS15 (key = 1) GSL_INTEG_GAUSS21 (key = 2) GSL_INTEG_GAUSS31 (key = 3) GSL_INTEG_GAUSS41 (key = 4) GSL_INTEG_GAUSS51 (key = 5) GSL_INTEG_GAUSS61 (key = 6) corresponding to the 15, 21, 31, 41, 51 and 61 point Gauss-Kronrod rules. The higher-order rules give better accuracy for smooth functions, while lower-order rules save time when the function contains local difficulties, such as discontinuities. On each iteration the adaptive integration strategy bisects the interval with the largest error estimate. The subintervals and their results are stored in the memory provided by WORKSPACE. 
The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace.  File: gsl-ref.info, Node: QAGS adaptive integration with singularities, Next: QAGP adaptive integration with known singular points, Prev: QAG adaptive integration, Up: Numerical Integration 17.4 QAGS adaptive integration with singularities ================================================= The presence of an integrable singularity in the integration region causes an adaptive routine to concentrate new subintervals around the singularity. As the subintervals decrease in size the successive approximations to the integral converge in a limiting fashion. This approach to the limit can be accelerated using an extrapolation procedure. The QAGS algorithm combines adaptive bisection with the Wynn epsilon-algorithm to speed up the integration of many types of integrable singularities. -- Function: int gsl_integration_qags (const gsl_function * F, double A, double B, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function applies the Gauss-Kronrod 21-point integration rule adaptively until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, EPSABS and EPSREL. The results are extrapolated using the epsilon-algorithm, which accelerates the convergence of the integral in the presence of discontinuities and integrable singularities. The function returns the final approximation from the extrapolation, RESULT, and an estimate of the absolute error, ABSERR. The subintervals and their results are stored in the memory provided by WORKSPACE. The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace.  File: gsl-ref.info, Node: QAGP adaptive integration with known singular points, Next: QAGI adaptive integration on infinite intervals, Prev: QAGS adaptive integration with singularities, Up: Numerical Integration 17.5 QAGP adaptive integration with known singular points ========================================================= -- Function: int gsl_integration_qagp (const gsl_function * F, double * PTS, size_t NPTS, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function applies the adaptive integration algorithm QAGS taking account of the user-supplied locations of singular points. The array PTS of length NPTS should contain the endpoints of the integration ranges defined by the integration region and locations of the singularities. For example, to integrate over the region (a,b) with break-points at x_1, x_2, x_3 (where a < x_1 < x_2 < x_3 < b) the following PTS array should be used pts[0] = a pts[1] = x_1 pts[2] = x_2 pts[3] = x_3 pts[4] = b with NPTS = 5. If you know the locations of the singular points in the integration region then this routine will be faster than `QAGS'.  
File: gsl-ref.info, Node: QAGI adaptive integration on infinite intervals, Next: QAWC adaptive integration for Cauchy principal values, Prev: QAGP adaptive integration with known singular points, Up: Numerical Integration 17.6 QAGI adaptive integration on infinite intervals ==================================================== -- Function: int gsl_integration_qagi (gsl_function * F, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function F over the infinite interval (-\infty,+\infty). The integral is mapped onto the semi-open interval (0,1] using the transformation x = (1-t)/t, \int_{-\infty}^{+\infty} dx f(x) = \int_0^1 dt (f((1-t)/t) + f((-1+t)/t))/t^2. It is then integrated using the QAGS algorithm. The normal 21-point Gauss-Kronrod rule of QAGS is replaced by a 15-point rule, because the transformation can generate an integrable singularity at the origin. In this case a lower-order rule is more efficient. -- Function: int gsl_integration_qagiu (gsl_function * F, double A, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function F over the semi-infinite interval (a,+\infty). The integral is mapped onto the semi-open interval (0,1] using the transformation x = a + (1-t)/t, \int_{a}^{+\infty} dx f(x) = \int_0^1 dt f(a + (1-t)/t)/t^2 and then integrated using the QAGS algorithm. -- Function: int gsl_integration_qagil (gsl_function * F, double B, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function F over the semi-infinite interval (-\infty,b). The integral is mapped onto the semi-open interval (0,1] using the transformation x = b - (1-t)/t, \int_{-\infty}^{b} dx f(x) = \int_0^1 dt f(b - (1-t)/t)/t^2 and then integrated using the QAGS algorithm.  File: gsl-ref.info, Node: QAWC adaptive integration for Cauchy principal values, Next: QAWS adaptive integration for singular functions, Prev: QAGI adaptive integration on infinite intervals, Up: Numerical Integration 17.7 QAWC adaptive integration for Cauchy principal values ========================================================== -- Function: int gsl_integration_qawc (gsl_function * F, double A, double B, double C, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the Cauchy principal value of the integral of f over (a,b), with a singularity at C, I = \int_a^b dx f(x) / (x - c) The adaptive bisection algorithm of QAG is used, with modifications to ensure that subdivisions do not occur at the singular point x = c. When a subinterval contains the point x = c or is close to it then a special 25-point modified Clenshaw-Curtis rule is used to control the singularity. Further away from the singularity the algorithm uses an ordinary 15-point Gauss-Kronrod integration rule.  File: gsl-ref.info, Node: QAWS adaptive integration for singular functions, Next: QAWO adaptive integration for oscillatory functions, Prev: QAWC adaptive integration for Cauchy principal values, Up: Numerical Integration 17.8 QAWS adaptive integration for singular functions ===================================================== The QAWS algorithm is designed for integrands with algebraic-logarithmic singularities at the end-points of an integration region. 
In order to work efficiently the algorithm requires a precomputed table of Chebyshev moments. -- Function: gsl_integration_qaws_table * gsl_integration_qaws_table_alloc (double ALPHA, double BETA, int MU, int NU) This function allocates space for a `gsl_integration_qaws_table' struct describing a singular weight function W(x) with the parameters (\alpha, \beta, \mu, \nu), W(x) = (x-a)^alpha (b-x)^beta log^mu (x-a) log^nu (b-x) where \alpha > -1, \beta > -1, and \mu = 0, 1, \nu = 0, 1. The weight function can take four different forms depending on the values of \mu and \nu, W(x) = (x-a)^alpha (b-x)^beta (mu = 0, nu = 0) W(x) = (x-a)^alpha (b-x)^beta log(x-a) (mu = 1, nu = 0) W(x) = (x-a)^alpha (b-x)^beta log(b-x) (mu = 0, nu = 1) W(x) = (x-a)^alpha (b-x)^beta log(x-a) log(b-x) (mu = 1, nu = 1) The singular points (a,b) do not have to be specified until the integral is computed, where they are the endpoints of the integration range. The function returns a pointer to the newly allocated table `gsl_integration_qaws_table' if no errors were detected, and 0 in the case of error. -- Function: int gsl_integration_qaws_table_set (gsl_integration_qaws_table * T, double ALPHA, double BETA, int MU, int NU) This function modifies the parameters (\alpha, \beta, \mu, \nu) of an existing `gsl_integration_qaws_table' struct T. -- Function: void gsl_integration_qaws_table_free (gsl_integration_qaws_table * T) This function frees all the memory associated with the `gsl_integration_qaws_table' struct T. -- Function: int gsl_integration_qaws (gsl_function * F, const double A, const double B, gsl_integration_qaws_table * T, const double EPSABS, const double EPSREL, const size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function f(x) over the interval (a,b) with the singular weight function (x-a)^\alpha (b-x)^\beta \log^\mu (x-a) \log^\nu (b-x). The parameters of the weight function (\alpha, \beta, \mu, \nu) are taken from the table T. The integral is, I = \int_a^b dx f(x) (x-a)^alpha (b-x)^beta log^mu (x-a) log^nu (b-x). The adaptive bisection algorithm of QAG is used. When a subinterval contains one of the endpoints then a special 25-point modified Clenshaw-Curtis rule is used to control the singularities. For subintervals which do not include the endpoints an ordinary 15-point Gauss-Kronrod integration rule is used.  File: gsl-ref.info, Node: QAWO adaptive integration for oscillatory functions, Next: QAWF adaptive integration for Fourier integrals, Prev: QAWS adaptive integration for singular functions, Up: Numerical Integration 17.9 QAWO adaptive integration for oscillatory functions ======================================================== The QAWO algorithm is designed for integrands with an oscillatory factor, \sin(\omega x) or \cos(\omega x). In order to work efficiently the algorithm requires a table of Chebyshev moments which must be pre-computed with calls to the functions below. -- Function: gsl_integration_qawo_table * gsl_integration_qawo_table_alloc (double OMEGA, double L, enum gsl_integration_qawo_enum SINE, size_t N) This function allocates space for a `gsl_integration_qawo_table' struct and its associated workspace describing a sine or cosine weight function W(x) with the parameters (\omega, L), W(x) = sin(omega x) W(x) = cos(omega x) The parameter L must be the length of the interval over which the function will be integrated L = b - a. 
The choice of sine or cosine is made with the parameter SINE which should be chosen from one of the two following symbolic values: GSL_INTEG_COSINE GSL_INTEG_SINE The `gsl_integration_qawo_table' is a table of the trigonometric coefficients required in the integration process. The parameter N determines the number of levels of coefficients that are computed. Each level corresponds to one bisection of the interval L, so that N levels are sufficient for subintervals down to the length L/2^n. The integration routine `gsl_integration_qawo' returns the error `GSL_ETABLE' if the number of levels is insufficient for the requested accuracy. -- Function: int gsl_integration_qawo_table_set (gsl_integration_qawo_table * T, double OMEGA, double L, enum gsl_integration_qawo_enum SINE) This function changes the parameters OMEGA, L and SINE of the existing workspace T. -- Function: int gsl_integration_qawo_table_set_length (gsl_integration_qawo_table * T, double L) This function allows the length parameter L of the workspace T to be changed. -- Function: void gsl_integration_qawo_table_free (gsl_integration_qawo_table * T) This function frees all the memory associated with the workspace T. -- Function: int gsl_integration_qawo (gsl_function * F, const double A, const double EPSABS, const double EPSREL, const size_t LIMIT, gsl_integration_workspace * WORKSPACE, gsl_integration_qawo_table * WF, double * RESULT, double * ABSERR) This function uses an adaptive algorithm to compute the integral of f over (a,b) with the weight function \sin(\omega x) or \cos(\omega x) defined by the table WF, I = \int_a^b dx f(x) sin(omega x) I = \int_a^b dx f(x) cos(omega x) The results are extrapolated using the epsilon-algorithm to accelerate the convergence of the integral. The function returns the final approximation from the extrapolation, RESULT, and an estimate of the absolute error, ABSERR. The subintervals and their results are stored in the memory provided by WORKSPACE. The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace. Those subintervals with "large" widths d where d\omega > 4 are computed using a 25-point Clenshaw-Curtis integration rule, which handles the oscillatory behavior. Subintervals with a "small" widths where d\omega < 4 are computed using a 15-point Gauss-Kronrod integration.  File: gsl-ref.info, Node: QAWF adaptive integration for Fourier integrals, Next: CQUAD doubly-adaptive integration, Prev: QAWO adaptive integration for oscillatory functions, Up: Numerical Integration 17.10 QAWF adaptive integration for Fourier integrals ===================================================== -- Function: int gsl_integration_qawf (gsl_function * F, const double A, const double EPSABS, const size_t LIMIT, gsl_integration_workspace * WORKSPACE, gsl_integration_workspace * CYCLE_WORKSPACE, gsl_integration_qawo_table * WF, double * RESULT, double * ABSERR) This function attempts to compute a Fourier integral of the function F over the semi-infinite interval [a,+\infty). I = \int_a^{+\infty} dx f(x) sin(omega x) I = \int_a^{+\infty} dx f(x) cos(omega x) The parameter \omega and choice of \sin or \cos is taken from the table WF (the length L can take any value, since it is overridden by this function to a value appropriate for the Fourier integration). The integral is computed using the QAWO algorithm over each of the subintervals, C_1 = [a, a + c] C_2 = [a + c, a + 2 c] ... = ... C_k = [a + (k-1) c, a + k c] where c = (2 floor(|\omega|) + 1) \pi/|\omega|. 
     The width c is chosen to cover an odd number of periods so that the contributions from the intervals alternate in sign and are monotonically decreasing in magnitude when F is positive and monotonically decreasing.  The sum of this sequence of contributions is accelerated using the epsilon-algorithm.

     This function works to an overall absolute tolerance of ABSERR.  The following strategy is used: on each interval C_k the algorithm tries to achieve the tolerance

          TOL_k = u_k abserr

     where u_k = (1 - p)p^{k-1} and p = 9/10.  The sum of the geometric series of contributions from each interval gives an overall tolerance of ABSERR.  If the integration of a subinterval leads to difficulties then the accuracy requirement for subsequent intervals is relaxed,

          TOL_k = u_k max(abserr, max_{i<k}{E_i})

     where E_i is the estimated error on the interval C_i.  The subintervals and their results are stored in the memory provided by WORKSPACE.  The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace.  The integration over each subinterval uses the memory provided by CYCLE_WORKSPACE as workspace for the QAWO algorithm.


File: gsl-ref.info, Node: Fixed order Gauss-Legendre integration, Next: Numerical integration error codes, Prev: CQUAD doubly-adaptive integration, Up: Numerical Integration

17.12 Fixed order Gauss-Legendre integration
============================================

The fixed order Gauss-Legendre integration routines are provided for fast integration of smooth functions with known polynomial order.  The N-point Gauss-Legendre rule is exact for polynomials of order 2*N-1 or less.

 -- Function: gsl_integration_glfixed_table * gsl_integration_glfixed_table_alloc (size_t N)
     This function determines the Gauss-Legendre abscissae and weights necessary for an N-point fixed order integration scheme.  If possible, high precision precomputed coefficients are used; otherwise the coefficients are computed on the fly.

 -- Function: double gsl_integration_glfixed (const gsl_function * F, double A, double B, const gsl_integration_glfixed_table * T)
     This function applies the Gauss-Legendre integration rule contained in table T and returns the result.

 -- Function: int gsl_integration_glfixed_point (double A, double B, size_t I, double * XI, double * WI, const gsl_integration_glfixed_table * T)
     For I in [0, ..., N - 1], this function obtains the I-th Gauss-Legendre point XI and weight WI on the interval [A,B].  The points and weights are ordered by increasing point value.  A function f may be integrated on [A,B] by summing wi * f(xi) over I.

 -- Function: void gsl_integration_glfixed_table_free (gsl_integration_glfixed_table * T)
     This function frees the memory associated with the table T.


File: gsl-ref.info, Node: Numerical integration error codes, Next: Numerical integration examples, Prev: Fixed order Gauss-Legendre integration, Up: Numerical Integration

17.13 Error codes
=================

In addition to the standard error codes for invalid arguments the functions can return the following values,

`GSL_EMAXITER'
     the maximum number of subdivisions was exceeded.

`GSL_EROUND'
     cannot reach tolerance because of roundoff error, or roundoff error was detected in the extrapolation table.

`GSL_ESING'
     a non-integrable singularity or other bad integrand behavior was found in the integration interval.

`GSL_EDIVERGE'
     the integral is divergent, or too slowly convergent to be integrated numerically.


File: gsl-ref.info, Node: Numerical integration examples, Next: Numerical integration References and Further Reading, Prev: Numerical integration error codes, Up: Numerical Integration

17.14 Examples
==============

The integrator `QAGS' will handle a large class of definite integrals.  For example, consider the following integral, which has an algebraic-logarithmic singularity at the origin,

     \int_0^1 x^{-1/2} log(x) dx = -4

The program below computes this integral to a relative accuracy bound of `1e-7'.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_integration.h>

     double f (double x, void * params) {
       double alpha = *(double *) params;
       double f = log(alpha*x) / sqrt(x);
       return f;
     }

     int main (void)
     {
       gsl_integration_workspace * w
         = gsl_integration_workspace_alloc (1000);

       double result, error;
       double expected = -4.0;
       double alpha = 1.0;

       gsl_function F;
       F.function = &f;
       F.params = &alpha;

       gsl_integration_qags (&F, 0, 1, 0, 1e-7, 1000,
                             w, &result, &error);

       printf ("result = % .18f\n", result);
       printf ("exact result = % .18f\n", expected);
       printf ("estimated error = % .18f\n", error);
       printf ("actual error = % .18f\n", result - expected);
       printf ("intervals = %zu\n", w->size);

       gsl_integration_workspace_free (w);

       return 0;
     }

The results below show that the desired accuracy is achieved after 8 subdivisions.

     $ ./a.out
     result = -3.999999999999973799
     exact result = -4.000000000000000000
     estimated error = 0.000000000000246025
     actual error = 0.000000000000026201
     intervals = 8

In fact, the extrapolation procedure used by `QAGS' produces an accuracy of almost twice as many digits.  The error estimate returned by the extrapolation procedure is larger than the actual error, giving a margin of safety of one order of magnitude.
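The oscillatory integrators described above can be used in the same style.  The following program is a sketch, not one of the original QUADPACK examples, showing how `gsl_integration_qawo' might be applied to the integral \int_0^1 exp(-x) sin(10 x) dx; the frequency \omega = 10, the table depth of 25 levels, the workspace size and the tolerance of `1e-9' are illustrative choices only.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_integration.h>

     /* integrand f(x) = exp(-x); the oscillatory factor sin(omega x)
        is supplied by the qawo table, not by f itself */
     double f (double x, void * params)
     {
       (void) params;
       return exp (-x);
     }

     int main (void)
     {
       const double omega = 10.0;
       const double L = 1.0;   /* length of the interval [0,1] */
       double result, error;
       /* exact value of \int_0^1 exp(-x) sin(10 x) dx */
       double expected =
         (10.0 - exp (-1.0) * (sin (10.0) + 10.0 * cos (10.0))) / 101.0;

       gsl_integration_workspace * w
         = gsl_integration_workspace_alloc (1000);
       gsl_integration_qawo_table * t
         = gsl_integration_qawo_table_alloc (omega, L, GSL_INTEG_SINE, 25);

       gsl_function F;
       F.function = &f;
       F.params = 0;

       gsl_integration_qawo (&F, 0.0, 0.0, 1e-9, 1000, w, t,
                             &result, &error);

       printf ("result          = % .18f\n", result);
       printf ("exact result    = % .18f\n", expected);
       printf ("estimated error = % .18f\n", error);

       gsl_integration_qawo_table_free (t);
       gsl_integration_workspace_free (w);
       return 0;
     }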

File: gsl-ref.info, Node: Numerical integration References and Further Reading, Prev: Numerical integration examples, Up: Numerical Integration

17.15 References and Further Reading
====================================

The following book is the definitive reference for QUADPACK, and was written by the original authors.  It provides descriptions of the algorithms, program listings, test programs and examples.  It also includes useful advice on numerical integration and many references to the numerical integration literature used in developing QUADPACK.

     R. Piessens, E. de Doncker-Kapenga, C.W. Ueberhuber, D.K. Kahaner.  `QUADPACK A subroutine package for automatic integration' Springer Verlag, 1983.

The CQUAD integration algorithm is described in the following paper:

     P. Gonnet, "Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants", `ACM Transactions on Mathematical Software', Volume 37 (2010), Issue 3, Article 26.


File: gsl-ref.info, Node: Random Number Generation, Next: Quasi-Random Sequences, Prev: Numerical Integration, Up: Top

18 Random Number Generation
***************************

The library provides a large collection of random number generators which can be accessed through a uniform interface.  Environment variables allow you to select different generators and seeds at runtime, so that you can easily switch between generators without needing to recompile your program.  Each instance of a generator keeps track of its own state, allowing the generators to be used in multi-threaded programs.  Additional functions are available for transforming uniform random numbers into samples from continuous or discrete probability distributions such as the Gaussian, log-normal or Poisson distributions.

These functions are declared in the header file `gsl_rng.h'.

* Menu:

* General comments on random numbers::
* The Random Number Generator Interface::
* Random number generator initialization::
* Sampling from a random number generator::
* Auxiliary random number generator functions::
* Random number environment variables::
* Copying random number generator state::
* Reading and writing random number generator state::
* Random number generator algorithms::
* Unix random number generators::
* Other random number generators::
* Random Number Generator Performance::
* Random Number Generator Examples::
* Random Number References and Further Reading::
* Random Number Acknowledgements::


File: gsl-ref.info, Node: General comments on random numbers, Next: The Random Number Generator Interface, Up: Random Number Generation

18.1 General comments on random numbers
=======================================

In 1988, Park and Miller wrote a paper entitled "Random number generators: good ones are hard to find." [Commun. ACM, 31, 1192-1201].  Fortunately, some excellent random number generators are available, though poor ones are still in common use.  You may be happy with the system-supplied random number generator on your computer, but you should be aware that as computers get faster, requirements on random number generators increase.  Nowadays, a simulation that calls a random number generator millions of times can often finish before you can make it down the hall to the coffee machine and back.

A very nice review of random number generators was written by Pierre L'Ecuyer, as Chapter 4 of the book: Handbook on Simulation, Jerry Banks, ed. (Wiley, 1997).
The chapter is available in postscript from L'Ecuyer's ftp site (see references). Knuth's volume on Seminumerical Algorithms (originally published in 1968) devotes 170 pages to random number generators, and has recently been updated in its 3rd edition (1997). It is brilliant, a classic. If you don't own it, you should stop reading right now, run to the nearest bookstore, and buy it. A good random number generator will satisfy both theoretical and statistical properties. Theoretical properties are often hard to obtain (they require real math!), but one prefers a random number generator with a long period, low serial correlation, and a tendency _not_ to "fall mainly on the planes." Statistical tests are performed with numerical simulations. Generally, a random number generator is used to estimate some quantity for which the theory of probability provides an exact answer. Comparison to this exact answer provides a measure of "randomness".  File: gsl-ref.info, Node: The Random Number Generator Interface, Next: Random number generator initialization, Prev: General comments on random numbers, Up: Random Number Generation 18.2 The Random Number Generator Interface ========================================== It is important to remember that a random number generator is not a "real" function like sine or cosine. Unlike real functions, successive calls to a random number generator yield different return values. Of course that is just what you want for a random number generator, but to achieve this effect, the generator must keep track of some kind of "state" variable. Sometimes this state is just an integer (sometimes just the value of the previously generated random number), but often it is more complicated than that and may involve a whole array of numbers, possibly with some indices thrown in. To use the random number generators, you do not need to know the details of what comprises the state, and besides that varies from algorithm to algorithm. The random number generator library uses two special structs, `gsl_rng_type' which holds static information about each type of generator and `gsl_rng' which describes an instance of a generator created from a given `gsl_rng_type'. The functions described in this section are declared in the header file `gsl_rng.h'.  File: gsl-ref.info, Node: Random number generator initialization, Next: Sampling from a random number generator, Prev: The Random Number Generator Interface, Up: Random Number Generation 18.3 Random number generator initialization =========================================== -- Function: gsl_rng * gsl_rng_alloc (const gsl_rng_type * T) This function returns a pointer to a newly-created instance of a random number generator of type T. For example, the following code creates an instance of the Tausworthe generator, gsl_rng * r = gsl_rng_alloc (gsl_rng_taus); If there is insufficient memory to create the generator then the function returns a null pointer and the error handler is invoked with an error code of `GSL_ENOMEM'. The generator is automatically initialized with the default seed, `gsl_rng_default_seed'. This is zero by default but can be changed either directly or by using the environment variable `GSL_RNG_SEED' (*note Random number environment variables::). The details of the available generator types are described later in this chapter. -- Function: void gsl_rng_set (const gsl_rng * R, unsigned long int S) This function initializes (or `seeds') the random number generator. 
If the generator is seeded with the same value of S on two different runs, the same stream of random numbers will be generated by successive calls to the routines below.  If different values of S >= 1 are supplied, then the generated streams of random numbers should be completely different.  If the seed S is zero then the standard seed from the original implementation is used instead.  For example, the original Fortran source code for the `ranlux' generator used a seed of 314159265, and so choosing S equal to zero reproduces this when using `gsl_rng_ranlux'.

     When using multiple seeds with the same generator, choose seed values greater than zero to avoid collisions with the default setting.  Note that most generators only accept 32-bit seeds, with higher values being reduced modulo 2^32.  For generators with smaller ranges the maximum seed value will typically be lower.

 -- Function: void gsl_rng_free (gsl_rng * R)
     This function frees all the memory associated with the generator R.


File: gsl-ref.info, Node: Sampling from a random number generator, Next: Auxiliary random number generator functions, Prev: Random number generator initialization, Up: Random Number Generation

18.4 Sampling from a random number generator
============================================

The following functions return uniformly distributed random numbers, either as integers or double precision floating point numbers.  Inline versions of these functions are used when `HAVE_INLINE' is defined.  To obtain non-uniform distributions, *note Random Number Distributions::.

 -- Function: unsigned long int gsl_rng_get (const gsl_rng * R)
     This function returns a random integer from the generator R.  The minimum and maximum values depend on the algorithm used, but all integers in the range [MIN,MAX] are equally likely.  The values of MIN and MAX can be determined using the auxiliary functions `gsl_rng_max (r)' and `gsl_rng_min (r)'.

 -- Function: double gsl_rng_uniform (const gsl_rng * R)
     This function returns a double precision floating point number uniformly distributed in the range [0,1).  The range includes 0.0 but excludes 1.0.  The value is typically obtained by dividing the result of `gsl_rng_get(r)' by `gsl_rng_max(r) + 1.0' in double precision.  Some generators compute this ratio internally so that they can provide floating point numbers with more than 32 bits of randomness (the maximum number of bits that can be portably represented in a single `unsigned long int').

 -- Function: double gsl_rng_uniform_pos (const gsl_rng * R)
     This function returns a positive double precision floating point number uniformly distributed in the range (0,1), excluding both 0.0 and 1.0.  The number is obtained by sampling the generator with the algorithm of `gsl_rng_uniform' until a non-zero value is obtained.  You can use this function if you need to avoid a singularity at 0.0.

 -- Function: unsigned long int gsl_rng_uniform_int (const gsl_rng * R, unsigned long int N)
     This function returns a random integer from 0 to n-1 inclusive by scaling down and/or discarding samples from the generator R.  All integers in the range [0,n-1] are produced with equal probability.  For generators with a non-zero minimum value an offset is applied so that zero is returned with the correct probability.

     Note that this function is designed for sampling from ranges smaller than the range of the underlying generator.  The parameter N must be less than or equal to the range of the generator R.  If N is larger than the range of the generator then the function calls the error handler with an error code of `GSL_EINVAL' and returns zero.

     In particular, this function is not intended for generating the full range of unsigned integer values [0,2^32-1].  Instead choose a generator with the maximal integer range and zero minimum value, such as `gsl_rng_ranlxd1', `gsl_rng_mt19937' or `gsl_rng_taus', and sample it directly using `gsl_rng_get'.  The range of each generator can be found using the auxiliary functions described in the next section.
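As a short sketch of how these sampling functions can be combined (this fragment is illustrative and not part of the library's own examples), the following program uses `gsl_rng_uniform_int' to simulate rolls of a six-sided die; the generator type and seed are arbitrary choices,

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     int main (void)
     {
       gsl_rng * r = gsl_rng_alloc (gsl_rng_mt19937);  /* any generator type */
       int i;

       gsl_rng_set (r, 42);   /* illustrative seed */

       for (i = 0; i < 5; i++)
         {
           /* gsl_rng_uniform_int returns values in [0, n-1];
              add 1 to obtain a die roll in [1, 6] */
           unsigned long int roll = gsl_rng_uniform_int (r, 6) + 1;
           printf ("roll %d = %lu\n", i + 1, roll);
         }

       gsl_rng_free (r);
       return 0;
     }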

File: gsl-ref.info, Node: Auxiliary random number generator functions, Next: Random number environment variables, Prev: Sampling from a random number generator, Up: Random Number Generation

18.5 Auxiliary random number generator functions
================================================

The following functions provide information about an existing generator.  You should use them in preference to hard-coding the generator parameters into your own code.

 -- Function: const char * gsl_rng_name (const gsl_rng * R)
     This function returns a pointer to the name of the generator.  For example,

          printf ("r is a '%s' generator\n", gsl_rng_name (r));

     would print something like `r is a 'taus' generator'.

 -- Function: unsigned long int gsl_rng_max (const gsl_rng * R)
     `gsl_rng_max' returns the largest value that `gsl_rng_get' can return.

 -- Function: unsigned long int gsl_rng_min (const gsl_rng * R)
     `gsl_rng_min' returns the smallest value that `gsl_rng_get' can return.  Usually this value is zero.  There are some generators with algorithms that cannot return zero, and for these generators the minimum value is 1.

 -- Function: void * gsl_rng_state (const gsl_rng * R)
 -- Function: size_t gsl_rng_size (const gsl_rng * R)
     These functions return a pointer to the state of generator R and its size.  You can use this information to access the state directly.  For example, the following code will write the state of a generator to a stream,

          void * state = gsl_rng_state (r);
          size_t n = gsl_rng_size (r);
          fwrite (state, n, 1, stream);

 -- Function: const gsl_rng_type ** gsl_rng_types_setup (void)
     This function returns a pointer to an array of all the available generator types, terminated by a null pointer.  The function should be called once at the start of the program, if needed.  The following code fragment shows how to iterate over the array of generator types to print the names of the available algorithms,

          const gsl_rng_type **t, **t0;

          t0 = gsl_rng_types_setup ();

          printf ("Available generators:\n");

          for (t = t0; *t != 0; t++)
            {
              printf ("%s\n", (*t)->name);
            }
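The raw state exposed by `gsl_rng_state' and `gsl_rng_size' can also be copied back into a generator of the same type.  The following program is a sketch of this technique (the generator type and the use of `memcpy' on the state block are illustrative): it records the state, draws a value, restores the state and checks that the same value is produced again,

     #include <stdio.h>
     #include <stdlib.h>
     #include <string.h>
     #include <gsl/gsl_rng.h>

     int main (void)
     {
       gsl_rng * r = gsl_rng_alloc (gsl_rng_taus);  /* illustrative type */
       void * state = gsl_rng_state (r);
       size_t n = gsl_rng_size (r);
       void * saved = malloc (n);       /* private copy of the state */
       unsigned long int a, b;

       memcpy (saved, state, n);

       a = gsl_rng_get (r);             /* advances the generator */
       memcpy (state, saved, n);        /* copy the saved bytes back in */
       b = gsl_rng_get (r);             /* repeats the same value */

       printf ("%lu %lu %s\n", a, b,
               (a == b) ? "(state restored)" : "(different)");

       free (saved);
       gsl_rng_free (r);
       return 0;
     }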

File: gsl-ref.info, Node: Random number environment variables, Next: Copying random number generator state, Prev: Auxiliary random number generator functions, Up: Random Number Generation

18.6 Random number environment variables
========================================

The library allows you to choose a default generator and seed from the environment variables `GSL_RNG_TYPE' and `GSL_RNG_SEED' and the function `gsl_rng_env_setup'.  This makes it easy to try out different generators and seeds without having to recompile your program.

 -- Function: const gsl_rng_type * gsl_rng_env_setup (void)
     This function reads the environment variables `GSL_RNG_TYPE' and `GSL_RNG_SEED' and uses their values to set the corresponding library variables `gsl_rng_default' and `gsl_rng_default_seed'.  These global variables are defined as follows,

          extern const gsl_rng_type *gsl_rng_default
          extern unsigned long int gsl_rng_default_seed

     The environment variable `GSL_RNG_TYPE' should be the name of a generator, such as `taus' or `mt19937'.  The environment variable `GSL_RNG_SEED' should contain the desired seed value.  It is converted to an `unsigned long int' using the C library function `strtoul'.

     If you don't specify a generator for `GSL_RNG_TYPE' then `gsl_rng_mt19937' is used as the default.  The initial value of `gsl_rng_default_seed' is zero.

Here is a short program which shows how to create a global generator using the environment variables `GSL_RNG_TYPE' and `GSL_RNG_SEED',

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     gsl_rng * r;  /* global generator */

     int main (void)
     {
       const gsl_rng_type * T;

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       printf ("generator type: %s\n", gsl_rng_name (r));
       printf ("seed = %lu\n", gsl_rng_default_seed);
       printf ("first value = %lu\n", gsl_rng_get (r));

       gsl_rng_free (r);

       return 0;
     }

Running the program without any environment variables uses the initial defaults, an `mt19937' generator with a seed of 0,

     $ ./a.out
     generator type: mt19937
     seed = 0
     first value = 4293858116

By setting the two variables on the command line we can change the default generator and the seed,

     $ GSL_RNG_TYPE="taus" GSL_RNG_SEED=123 ./a.out
     GSL_RNG_TYPE=taus
     GSL_RNG_SEED=123
     generator type: taus
     seed = 123
     first value = 2720986350


File: gsl-ref.info, Node: Copying random number generator state, Next: Reading and writing random number generator state, Prev: Random number environment variables, Up: Random Number Generation

18.7 Copying random number generator state
==========================================

The above methods do not expose the random number `state' which changes from call to call.  It is often useful to be able to save and restore the state.  To permit these practices, a few somewhat more advanced functions are supplied.  These include:

 -- Function: int gsl_rng_memcpy (gsl_rng * DEST, const gsl_rng * SRC)
     This function copies the random number generator SRC into the pre-existing generator DEST, making DEST into an exact copy of SRC.  The two generators must be of the same type.

 -- Function: gsl_rng * gsl_rng_clone (const gsl_rng * R)
     This function returns a pointer to a newly created generator which is an exact copy of the generator R.


File: gsl-ref.info, Node: Reading and writing random number generator state, Next: Random number generator algorithms, Prev: Copying random number generator state, Up: Random Number Generation

18.8 Reading and writing random number generator state
======================================================

The library provides functions for reading and writing the random number state to a file as binary data.

 -- Function: int gsl_rng_fwrite (FILE * STREAM, const gsl_rng * R)
     This function writes the random number state of the random number generator R to the stream STREAM in binary format.  The return value is 0 for success and `GSL_EFAILED' if there was a problem writing to the file.  Since the data is written in the native binary format it may not be portable between different architectures.

 -- Function: int gsl_rng_fread (FILE * STREAM, gsl_rng * R)
     This function reads the random number state into the random number generator R from the open stream STREAM in binary format.  The random number generator R must be preinitialized with the correct random number generator type since type information is not saved.
The return value is 0 for success and `GSL_EFAILED' if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture.  File: gsl-ref.info, Node: Random number generator algorithms, Next: Unix random number generators, Prev: Reading and writing random number generator state, Up: Random Number Generation 18.9 Random number generator algorithms ======================================= The functions described above make no reference to the actual algorithm used. This is deliberate so that you can switch algorithms without having to change any of your application source code. The library provides a large number of generators of different types, including simulation quality generators, generators provided for compatibility with other libraries and historical generators from the past. The following generators are recommended for use in simulation. They have extremely long periods, low correlation and pass most statistical tests. For the most reliable source of uncorrelated numbers, the second-generation RANLUX generators have the strongest proof of randomness. -- Generator: gsl_rng_mt19937 The MT19937 generator of Makoto Matsumoto and Takuji Nishimura is a variant of the twisted generalized feedback shift-register algorithm, and is known as the "Mersenne Twister" generator. It has a Mersenne prime period of 2^19937 - 1 (about 10^6000) and is equi-distributed in 623 dimensions. It has passed the DIEHARD statistical tests. It uses 624 words of state per generator and is comparable in speed to the other generators. The original generator used a default seed of 4357 and choosing S equal to zero in `gsl_rng_set' reproduces this. Later versions switched to 5489 as the default seed, you can choose this explicitly via `gsl_rng_set' instead if you require it. For more information see, Makoto Matsumoto and Takuji Nishimura, "Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator". `ACM Transactions on Modeling and Computer Simulation', Vol. 8, No. 1 (Jan. 1998), Pages 3-30 The generator `gsl_rng_mt19937' uses the second revision of the seeding procedure published by the two authors above in 2002. The original seeding procedures could cause spurious artifacts for some seed values. They are still available through the alternative generators `gsl_rng_mt19937_1999' and `gsl_rng_mt19937_1998'. -- Generator: gsl_rng_ranlxs0 -- Generator: gsl_rng_ranlxs1 -- Generator: gsl_rng_ranlxs2 The generator `ranlxs0' is a second-generation version of the RANLUX algorithm of Lu"scher, which produces "luxury random numbers". This generator provides single precision output (24 bits) at three luxury levels `ranlxs0', `ranlxs1' and `ranlxs2', in increasing order of strength. It uses double-precision floating point arithmetic internally and can be significantly faster than the integer version of `ranlux', particularly on 64-bit architectures. The period of the generator is about 10^171. The algorithm has mathematically proven properties and can provide truly decorrelated numbers at a known level of randomness. The higher luxury levels provide increased decorrelation between samples as an additional safety margin. Note that the range of allowed seeds for this generator is [0,2^31-1]. Higher seed values are wrapped modulo 2^31. -- Generator: gsl_rng_ranlxd1 -- Generator: gsl_rng_ranlxd2 These generators produce double precision output (48 bits) from the RANLXS generator. 
The library provides two luxury levels `ranlxd1' and `ranlxd2', in increasing order of strength. -- Generator: gsl_rng_ranlux -- Generator: gsl_rng_ranlux389 The `ranlux' generator is an implementation of the original algorithm developed by Lu"scher. It uses a lagged-fibonacci-with-skipping algorithm to produce "luxury random numbers". It is a 24-bit generator, originally designed for single-precision IEEE floating point numbers. This implementation is based on integer arithmetic, while the second-generation versions RANLXS and RANLXD described above provide floating-point implementations which will be faster on many platforms. The period of the generator is about 10^171. The algorithm has mathematically proven properties and it can provide truly decorrelated numbers at a known level of randomness. The default level of decorrelation recommended by Lu"scher is provided by `gsl_rng_ranlux', while `gsl_rng_ranlux389' gives the highest level of randomness, with all 24 bits decorrelated. Both types of generator use 24 words of state per generator. For more information see, M. Lu"scher, "A portable high-quality random number generator for lattice field theory calculations", `Computer Physics Communications', 79 (1994) 100-110. F. James, "RANLUX: A Fortran implementation of the high-quality pseudo-random number generator of Lu"scher", `Computer Physics Communications', 79 (1994) 111-114 -- Generator: gsl_rng_cmrg This is a combined multiple recursive generator by L'Ecuyer. Its sequence is, z_n = (x_n - y_n) mod m_1 where the two underlying generators x_n and y_n are, x_n = (a_1 x_{n-1} + a_2 x_{n-2} + a_3 x_{n-3}) mod m_1 y_n = (b_1 y_{n-1} + b_2 y_{n-2} + b_3 y_{n-3}) mod m_2 with coefficients a_1 = 0, a_2 = 63308, a_3 = -183326, b_1 = 86098, b_2 = 0, b_3 = -539608, and moduli m_1 = 2^31 - 1 = 2147483647 and m_2 = 2145483479. The period of this generator is lcm(m_1^3-1, m_2^3-1), which is approximately 2^185 (about 10^56). It uses 6 words of state per generator. For more information see, P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators", `Operations Research', 44, 5 (1996), 816-822. -- Generator: gsl_rng_mrg This is a fifth-order multiple recursive generator by L'Ecuyer, Blouin and Coutre. Its sequence is, x_n = (a_1 x_{n-1} + a_5 x_{n-5}) mod m with a_1 = 107374182, a_2 = a_3 = a_4 = 0, a_5 = 104480 and m = 2^31 - 1. The period of this generator is about 10^46. It uses 5 words of state per generator. More information can be found in the following paper, P. L'Ecuyer, F. Blouin, and R. Coutre, "A search for good multiple recursive random number generators", `ACM Transactions on Modeling and Computer Simulation' 3, 87-98 (1993). -- Generator: gsl_rng_taus -- Generator: gsl_rng_taus2 This is a maximally equidistributed combined Tausworthe generator by L'Ecuyer. The sequence is, x_n = (s1_n ^^ s2_n ^^ s3_n) where, s1_{n+1} = (((s1_n&4294967294)<<12)^^(((s1_n<<13)^^s1_n)>>19)) s2_{n+1} = (((s2_n&4294967288)<< 4)^^(((s2_n<< 2)^^s2_n)>>25)) s3_{n+1} = (((s3_n&4294967280)<<17)^^(((s3_n<< 3)^^s3_n)>>11)) computed modulo 2^32. In the formulas above ^^ denotes "exclusive-or". Note that the algorithm relies on the properties of 32-bit unsigned integers and has been implemented using a bitmask of `0xFFFFFFFF' to make it work on 64 bit machines. The period of this generator is 2^88 (about 10^26). It uses 3 words of state per generator. For more information see, P. L'Ecuyer, "Maximally Equidistributed Combined Tausworthe Generators", `Mathematics of Computation', 65, 213 (1996), 203-213. 
     The generator `gsl_rng_taus2' uses the same algorithm as `gsl_rng_taus' but with an improved seeding procedure described in the paper,

          P. L'Ecuyer, "Tables of Maximally Equidistributed Combined LFSR Generators", `Mathematics of Computation', 68, 225 (1999), 261-269.

     The generator `gsl_rng_taus2' should now be used in preference to `gsl_rng_taus'.

 -- Generator: gsl_rng_gfsr4
     The `gfsr4' generator is like a lagged-fibonacci generator, and produces each number as an `xor''d sum of four previous values.

          r_n = r_{n-A} ^^ r_{n-B} ^^ r_{n-C} ^^ r_{n-D}

     Ziff (ref below) notes that "it is now widely known" that two-tap registers (such as R250, which is described below) have serious flaws, the most obvious one being the three-point correlation that comes from the definition of the generator.  Nice mathematical properties can be derived for GFSR's, and numerics bears out the claim that 4-tap GFSR's with appropriately chosen offsets are as random as can be measured, using the author's test.

     This implementation uses the values suggested in the example on p392 of Ziff's article: A=471, B=1586, C=6988, D=9689.

     If the offsets are appropriately chosen (such as the ones in this implementation), then the sequence is said to be maximal; that means that the period is 2^D - 1, where D is the longest lag.  (It is one less than 2^D because it is not permitted to have all zeros in the `ra[]' array.)  For this implementation with D=9689 that works out to about 10^2917.

     Note that the implementation of this generator using a 32-bit integer amounts to 32 parallel implementations of one-bit generators.  One consequence of this is that the period of this 32-bit generator is the same as for the one-bit generator.  Moreover, this independence means that all 32-bit patterns are equally likely, and in particular that 0 is an allowed random value.  (We are grateful to Heiko Bauke for clarifying for us these properties of GFSR random number generators.)

     For more information see,

          Robert M. Ziff, "Four-tap shift-register-sequence random-number generators", `Computers in Physics', 12(4), Jul/Aug 1998, pp 385-392.


File: gsl-ref.info, Node: Unix random number generators, Next: Other random number generators, Prev: Random number generator algorithms, Up: Random Number Generation

18.10 Unix random number generators
===================================

The standard Unix random number generators `rand', `random' and `rand48' are provided as part of GSL.  Although these generators are widely available individually, often they aren't all available on the same platform.  This makes it difficult to write portable code using them and so we have included the complete set of Unix generators in GSL for convenience.

Note that these generators don't produce high-quality randomness and aren't suitable for work requiring accurate statistics.  However, if you won't be measuring statistical quantities and just want to introduce some variation into your program then these generators are quite acceptable.

 -- Generator: gsl_rng_rand
     This is the BSD `rand' generator.  Its sequence is

          x_{n+1} = (a x_n + c) mod m

     with a = 1103515245, c = 12345 and m = 2^31.  The seed specifies the initial value, x_1.  The period of this generator is 2^31, and it uses 1 word of storage per generator.

 -- Generator: gsl_rng_random_bsd
 -- Generator: gsl_rng_random_libc5
 -- Generator: gsl_rng_random_glibc2
     These generators implement the `random' family of functions, a set of linear feedback shift register generators originally used in BSD Unix.
There are several versions of `random' in use today: the original BSD version (e.g. on SunOS4), a libc5 version (found on older GNU/Linux systems) and a glibc2 version. Each version uses a different seeding procedure, and thus produces different sequences. The original BSD routines accepted a variable length buffer for the generator state, with longer buffers providing higher-quality randomness. The `random' function implemented algorithms for buffer lengths of 8, 32, 64, 128 and 256 bytes, and the algorithm with the largest length that would fit into the user-supplied buffer was used. To support these algorithms additional generators are available with the following names, gsl_rng_random8_bsd gsl_rng_random32_bsd gsl_rng_random64_bsd gsl_rng_random128_bsd gsl_rng_random256_bsd where the numeric suffix indicates the buffer length. The original BSD `random' function used a 128-byte default buffer and so `gsl_rng_random_bsd' has been made equivalent to `gsl_rng_random128_bsd'. Corresponding versions of the `libc5' and `glibc2' generators are also available, with the names `gsl_rng_random8_libc5', `gsl_rng_random8_glibc2', etc. -- Generator: gsl_rng_rand48 This is the Unix `rand48' generator. Its sequence is x_{n+1} = (a x_n + c) mod m defined on 48-bit unsigned integers with a = 25214903917, c = 11 and m = 2^48. The seed specifies the upper 32 bits of the initial value, x_1, with the lower 16 bits set to `0x330E'. The function `gsl_rng_get' returns the upper 32 bits from each term of the sequence. This does not have a direct parallel in the original `rand48' functions, but forcing the result to type `long int' reproduces the output of `mrand48'. The function `gsl_rng_uniform' uses the full 48 bits of internal state to return the double precision number x_n/m, which is equivalent to the function `drand48'. Note that some versions of the GNU C Library contained a bug in `mrand48' function which caused it to produce different results (only the lower 16-bits of the return value were set).  File: gsl-ref.info, Node: Other random number generators, Next: Random Number Generator Performance, Prev: Unix random number generators, Up: Random Number Generation 18.11 Other random number generators ==================================== The generators in this section are provided for compatibility with existing libraries. If you are converting an existing program to use GSL then you can select these generators to check your new implementation against the original one, using the same random number generator. After verifying that your new program reproduces the original results you can then switch to a higher-quality generator. Note that most of the generators in this section are based on single linear congruence relations, which are the least sophisticated type of generator. In particular, linear congruences have poor properties when used with a non-prime modulus, as several of these routines do (e.g. with a power of two modulus, 2^31 or 2^32). This leads to periodicity in the least significant bits of each number, with only the higher bits having any randomness. Thus if you want to produce a random bitstream it is best to avoid using the least significant bits. -- Generator: gsl_rng_ranf This is the CRAY random number generator `RANF'. Its sequence is x_{n+1} = (a x_n) mod m defined on 48-bit unsigned integers with a = 44485709377909 and m = 2^48. The seed specifies the lower 32 bits of the initial value, x_1, with the lowest bit set to prevent the seed taking an even value. 
     The upper 16 bits of x_1 are set to 0.  A consequence of this procedure is that the pairs of seeds 2 and 3, 4 and 5, etc. produce the same sequences.

     The generator is compatible with the CRAY MATHLIB routine RANF.  It produces double precision floating point numbers which should be identical to those from the original RANF.

     There is a subtlety in the implementation of the seeding.  The initial state is reversed through one step, by multiplying by the modular inverse of a mod m.  This is done for compatibility with the original CRAY implementation.

     Note that you can only seed the generator with integers up to 2^32, while the original CRAY implementation uses non-portable wide integers which can cover all 2^48 states of the generator.

     The function `gsl_rng_get' returns the upper 32 bits from each term of the sequence.  The function `gsl_rng_uniform' uses the full 48 bits to return the double precision number x_n/m.

     The period of this generator is 2^46.

 -- Generator: gsl_rng_ranmar
     This is the RANMAR lagged-fibonacci generator of Marsaglia, Zaman and Tsang.  It is a 24-bit generator, originally designed for single-precision IEEE floating point numbers.  It was included in the CERNLIB high-energy physics library.

 -- Generator: gsl_rng_r250
     This is the shift-register generator of Kirkpatrick and Stoll.  The sequence is based on the recurrence

          x_n = x_{n-103} ^^ x_{n-250}

     where ^^ denotes "exclusive-or", defined on 32-bit words.  The period of this generator is about 2^250 and it uses 250 words of state per generator.

     For more information see,

          S. Kirkpatrick and E. Stoll, "A very fast shift-register sequence random number generator", `Journal of Computational Physics', 40, 517-526 (1981).

 -- Generator: gsl_rng_tt800
     This is an earlier version of the twisted generalized feedback shift-register generator, and has been superseded by the development of MT19937.  However, it is still an acceptable generator in its own right.  It has a period of 2^800 and uses 33 words of storage per generator.

     For more information see,

          Makoto Matsumoto and Yoshiharu Kurita, "Twisted GFSR Generators II", `ACM Transactions on Modelling and Computer Simulation', Vol. 4, No. 3, 1994, pages 254-266.

 -- Generator: gsl_rng_vax
     This is the VAX generator `MTH$RANDOM'.  Its sequence is,

          x_{n+1} = (a x_n + c) mod m

     with a = 69069, c = 1 and m = 2^32.  The seed specifies the initial value, x_1.  The period of this generator is 2^32 and it uses 1 word of storage per generator.

 -- Generator: gsl_rng_transputer
     This is the random number generator from the INMOS Transputer Development system.  Its sequence is,

          x_{n+1} = (a x_n) mod m

     with a = 1664525 and m = 2^32.  The seed specifies the initial value, x_1.

 -- Generator: gsl_rng_randu
     This is the IBM `RANDU' generator.  Its sequence is

          x_{n+1} = (a x_n) mod m

     with a = 65539 and m = 2^31.  The seed specifies the initial value, x_1.  The period of this generator was only 2^29.  It has become a textbook example of a poor generator.

 -- Generator: gsl_rng_minstd
     This is Park and Miller's "minimal standard" MINSTD generator, a simple linear congruence which takes care to avoid the major pitfalls of such algorithms.  Its sequence is,

          x_{n+1} = (a x_n) mod m

     with a = 16807 and m = 2^31 - 1 = 2147483647.  The seed specifies the initial value, x_1.  The period of this generator is about 2^31.

     This generator is used in the IMSL Library (subroutine RNUN) and in MATLAB (the RAND function).  It is also sometimes known by the acronym "GGL" (I'm not sure what that stands for).

     For more information see,

          Park and Miller, "Random Number Generators: Good ones are hard to find", `Communications of the ACM', October 1988, Volume 31, No 10, pages 1192-1201.
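For example, a port of legacy code based on the minimal standard generator might be checked with a sketch along the following lines, printing the first few raw values for the legacy seed so they can be compared with the output of the original routine (the seed value here is illustrative),

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     int main (void)
     {
       gsl_rng * r = gsl_rng_alloc (gsl_rng_minstd);
       int i;

       gsl_rng_set (r, 1);   /* illustrative seed; use the legacy seed */

       for (i = 0; i < 5; i++)
         printf ("%lu\n", gsl_rng_get (r));  /* compare with the original */

       gsl_rng_free (r);
       return 0;
     }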
 -- Generator: gsl_rng_uni
 -- Generator: gsl_rng_uni32
     This is a reimplementation of the 16-bit SLATEC random number generator RUNIF.  A generalization of the generator to 32 bits is provided by `gsl_rng_uni32'.  The original source code is available from NETLIB.

 -- Generator: gsl_rng_slatec
     This is the SLATEC random number generator RAND.  It is ancient.  The original source code is available from NETLIB.

 -- Generator: gsl_rng_zuf
     This is the ZUFALL lagged Fibonacci series generator of Petersen.  Its sequence is,

          t = u_{n-273} + u_{n-607}
          u_n = t - floor(t)

     The original source code is available from NETLIB.  For more information see,

          W. Petersen, "Lagged Fibonacci Random Number Generators for the NEC SX-3", `International Journal of High Speed Computing' (1994).

 -- Generator: gsl_rng_knuthran2
     This is a second-order multiple recursive generator described by Knuth in `Seminumerical Algorithms', 3rd Ed., page 108.  Its sequence is,

          x_n = (a_1 x_{n-1} + a_2 x_{n-2}) mod m

     with a_1 = 271828183, a_2 = 314159269, and m = 2^31 - 1.

 -- Generator: gsl_rng_knuthran2002
 -- Generator: gsl_rng_knuthran
     This is a second-order multiple recursive generator described by Knuth in `Seminumerical Algorithms', 3rd Ed., Section 3.6.  Knuth provides its C code.  The updated routine `gsl_rng_knuthran2002' is from the revised 9th printing and corrects some weaknesses in the earlier version, which is implemented as `gsl_rng_knuthran'.

 -- Generator: gsl_rng_borosh13
 -- Generator: gsl_rng_fishman18
 -- Generator: gsl_rng_fishman20
 -- Generator: gsl_rng_lecuyer21
 -- Generator: gsl_rng_waterman14
     These multiplicative generators are taken from Knuth's `Seminumerical Algorithms', 3rd Ed., pages 106-108.  Their sequence is,

          x_{n+1} = (a x_n) mod m

     where the seed specifies the initial value, x_1.  The parameters a and m are as follows,

          Borosh-Niederreiter: a = 1812433253, m = 2^32,
          Fishman18: a = 62089911, m = 2^31 - 1,
          Fishman20: a = 48271, m = 2^31 - 1,
          L'Ecuyer: a = 40692, m = 2^31 - 249,
          Waterman: a = 1566083941, m = 2^32.

 -- Generator: gsl_rng_fishman2x
     This is the L'Ecuyer-Fishman random number generator.  It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., page 108.  Its sequence is,

          z_{n+1} = (x_n - y_n) mod m

     with m = 2^31 - 1.  x_n and y_n are given by the `fishman20' and `lecuyer21' algorithms.  The seed specifies the initial value, x_1.

 -- Generator: gsl_rng_coveyou
     This is the Coveyou random number generator.  It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., Section 3.2.2.  Its sequence is,

          x_{n+1} = (x_n (x_n + 1)) mod m

     with m = 2^32.  The seed specifies the initial value, x_1.


File: gsl-ref.info, Node: Random Number Generator Performance, Next: Random Number Generator Examples, Prev: Other random number generators, Up: Random Number Generation

18.12 Performance
=================

The following table shows the relative performance of a selection of the available random number generators.  The fastest simulation quality generators are `taus', `gfsr4' and `mt19937'.  The generators which offer the best mathematically-proven quality are those based on the RANLUX algorithm.

     1754 k ints/sec,    870 k doubles/sec, taus
     1613 k ints/sec,    855 k doubles/sec, gfsr4
     1370 k ints/sec,    769 k doubles/sec, mt19937
      565 k ints/sec,    571 k doubles/sec, ranlxs0
      400 k ints/sec,    405 k doubles/sec, ranlxs1
      490 k ints/sec,    389 k doubles/sec, mrg
      407 k ints/sec,    297 k doubles/sec, ranlux
      243 k ints/sec,    254 k doubles/sec, ranlxd1
      251 k ints/sec,    253 k doubles/sec, ranlxs2
      238 k ints/sec,    215 k doubles/sec, cmrg
      247 k ints/sec,    198 k doubles/sec, ranlux389
      141 k ints/sec,    140 k doubles/sec, ranlxd2
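These timings are machine and compiler dependent.  A rough benchmark in the style of the following sketch (the generator list, iteration count and use of the standard `clock' function are illustrative) can be used to measure comparable rates on a particular system,

     #include <stdio.h>
     #include <time.h>
     #include <gsl/gsl_rng.h>

     /* Sketch of a simple timing loop for gsl_rng_get.  The set of
        generators and the number of iterations are illustrative. */
     static void time_generator (const gsl_rng_type * T)
     {
       gsl_rng * r = gsl_rng_alloc (T);
       const size_t N = 10000000;
       size_t i;
       clock_t start, end;
       volatile unsigned long int k = 0;  /* keep the loop from being removed */

       start = clock ();
       for (i = 0; i < N; i++)
         k = gsl_rng_get (r);
       end = clock ();

       printf ("%-12s %.0f k ints/sec\n", gsl_rng_name (r),
               (double) N / 1000.0
                 / ((double) (end - start) / CLOCKS_PER_SEC));

       gsl_rng_free (r);
     }

     int main (void)
     {
       time_generator (gsl_rng_taus);
       time_generator (gsl_rng_mt19937);
       time_generator (gsl_rng_ranlxd2);
       return 0;
     }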

File: gsl-ref.info, Node: Random Number Generator Examples, Next: Random Number References and Further Reading, Prev: Random Number Generator Performance, Up: Random Number Generation

18.13 Examples
==============

The following program demonstrates the use of a random number generator to produce uniform random numbers in the range [0.0, 1.0),

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     int main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       int i, n = 10;

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       for (i = 0; i < n; i++)
         {
           double u = gsl_rng_uniform (r);
           printf ("%.5f\n", u);
         }

       gsl_rng_free (r);

       return 0;
     }

Here is the output of the program,

     $ ./a.out
     0.99974
     0.16291
     0.28262
     0.94720
     0.23166
     0.48497
     0.95748
     0.74431
     0.54004
     0.73995

The numbers depend on the seed used by the generator.  The default seed can be changed with the `GSL_RNG_SEED' environment variable to produce a different stream of numbers.  The generator itself can be changed using the environment variable `GSL_RNG_TYPE'.

Here is the output of the program using a seed value of 123 and the multiple-recursive generator `mrg',

     $ GSL_RNG_SEED=123 GSL_RNG_TYPE=mrg ./a.out
     GSL_RNG_TYPE=mrg
     GSL_RNG_SEED=123
     0.33050
     0.86631
     0.32982
     0.67620
     0.53391
     0.06457
     0.16847
     0.70229
     0.04371
     0.86374


File: gsl-ref.info, Node: Random Number References and Further Reading, Next: Random Number Acknowledgements, Prev: Random Number Generator Examples, Up: Random Number Generation

18.14 References and Further Reading
====================================

The subject of random number generation and testing is reviewed extensively in Knuth's `Seminumerical Algorithms'.

     Donald E. Knuth, `The Art of Computer Programming: Seminumerical Algorithms' (Vol 2, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896842.

Further information is available in the review paper written by Pierre L'Ecuyer,

     P. L'Ecuyer, "Random Number Generation", Chapter 4 of the Handbook on Simulation, Jerry Banks Ed., Wiley, 1998, 93-137.

     `http://www.iro.umontreal.ca/~lecuyer/papers.html' in the file `handsim.ps'.

The source code for the DIEHARD random number generator tests is also available online,

     `DIEHARD source code' G. Marsaglia, `http://stat.fsu.edu/pub/diehard/'

A comprehensive set of random number generator tests is available from NIST,

     NIST Special Publication 800-22, "A Statistical Test Suite for the Validation of Random Number Generators and Pseudo Random Number Generators for Cryptographic Applications".

     `http://csrc.nist.gov/rng/'


File: gsl-ref.info, Node: Random Number Acknowledgements, Prev: Random Number References and Further Reading, Up: Random Number Generation

18.15 Acknowledgements
======================

Thanks to Makoto Matsumoto, Takuji Nishimura and Yoshiharu Kurita for making the source code to their generators (MT19937, MM&TN; TT800, MM&YK) available under the GNU General Public License.  Thanks to Martin Lu"scher for providing notes and source code for the RANLXS and RANLXD generators.

File: gsl-ref.info, Node: Quasi-Random Sequences, Next: Random Number Distributions, Prev: Random Number Generation, Up: Top

19 Quasi-Random Sequences
*************************

This chapter describes functions for generating quasi-random sequences in arbitrary dimensions.  A quasi-random sequence progressively covers a d-dimensional space with a set of points that are uniformly distributed.  Quasi-random sequences are also known as low-discrepancy sequences.  The quasi-random sequence generators use an interface that is similar to the interface for random number generators, except that seeding is not required--each generator produces a single sequence.

The functions described in this section are declared in the header file `gsl_qrng.h'.

* Menu:

* Quasi-random number generator initialization::
* Sampling from a quasi-random number generator::
* Auxiliary quasi-random number generator functions::
* Saving and restoring quasi-random number generator state::
* Quasi-random number generator algorithms::
* Quasi-random number generator examples::
* Quasi-random number references::


File: gsl-ref.info, Node: Quasi-random number generator initialization, Next: Sampling from a quasi-random number generator, Up: Quasi-Random Sequences

19.1 Quasi-random number generator initialization
=================================================

 -- Function: gsl_qrng * gsl_qrng_alloc (const gsl_qrng_type * T, unsigned int D)
     This function returns a pointer to a newly-created instance of a quasi-random sequence generator of type T and dimension D.  If there is insufficient memory to create the generator then the function returns a null pointer and the error handler is invoked with an error code of `GSL_ENOMEM'.

 -- Function: void gsl_qrng_free (gsl_qrng * Q)
     This function frees all the memory associated with the generator Q.

 -- Function: void gsl_qrng_init (gsl_qrng * Q)
     This function reinitializes the generator Q to its starting point.  Note that quasi-random sequences do not use a seed and always produce the same set of values.


File: gsl-ref.info, Node: Sampling from a quasi-random number generator, Next: Auxiliary quasi-random number generator functions, Prev: Quasi-random number generator initialization, Up: Quasi-Random Sequences

19.2 Sampling from a quasi-random number generator
==================================================

 -- Function: int gsl_qrng_get (const gsl_qrng * Q, double X[])
     This function stores the next point from the sequence generator Q in the array X.  The space available for X must match the dimension of the generator.  The point X will lie in the range 0 < x_i < 1 for each x_i.  An inline version of this function is used when `HAVE_INLINE' is defined.


File: gsl-ref.info, Node: Auxiliary quasi-random number generator functions, Next: Saving and restoring quasi-random number generator state, Prev: Sampling from a quasi-random number generator, Up: Quasi-Random Sequences

19.3 Auxiliary quasi-random number generator functions
======================================================

 -- Function: const char * gsl_qrng_name (const gsl_qrng * Q)
     This function returns a pointer to the name of the generator.

 -- Function: size_t gsl_qrng_size (const gsl_qrng * Q)
 -- Function: void * gsl_qrng_state (const gsl_qrng * Q)
     These functions return a pointer to the state of generator Q and its size.  You can use this information to access the state directly.  For example, the following code will write the state of a generator to a stream,

          void * state = gsl_qrng_state (q);
          size_t n = gsl_qrng_size (q);
          fwrite (state, n, 1, stream);
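The raw state can also be used to checkpoint and resume a sequence.  The following sketch (the Sobol generator, dimension 2 and the checkpoint position are illustrative choices) saves the state, advances the sequence, then rolls it back and verifies that the same point is produced,

     #include <stdio.h>
     #include <stdlib.h>
     #include <string.h>
     #include <gsl/gsl_qrng.h>

     int main (void)
     {
       gsl_qrng * q = gsl_qrng_alloc (gsl_qrng_sobol, 2);
       size_t n = gsl_qrng_size (q);
       void * saved = malloc (n);
       double v[2], w[2];
       int i;

       for (i = 0; i < 100; i++)       /* advance the sequence */
         gsl_qrng_get (q, v);

       memcpy (saved, gsl_qrng_state (q), n);  /* checkpoint the raw state */

       gsl_qrng_get (q, v);            /* next point after the checkpoint */

       memcpy (gsl_qrng_state (q), saved, n);  /* roll the generator back */
       gsl_qrng_get (q, w);            /* reproduces the same point */

       printf ("%.5f %.5f\n%.5f %.5f\n", v[0], v[1], w[0], w[1]);

       free (saved);
       gsl_qrng_free (q);
       return 0;
     }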

File: gsl-ref.info, Node: Saving and restoring quasi-random number generator state, Next: Quasi-random number generator algorithms, Prev: Auxiliary quasi-random number generator functions, Up: Quasi-Random Sequences

19.4 Saving and restoring quasi-random number generator state
=============================================================

 -- Function: int gsl_qrng_memcpy (gsl_qrng * DEST, const gsl_qrng * SRC)
     This function copies the quasi-random sequence generator SRC into the pre-existing generator DEST, making DEST into an exact copy of SRC.  The two generators must be of the same type.

 -- Function: gsl_qrng * gsl_qrng_clone (const gsl_qrng * Q)
     This function returns a pointer to a newly created generator which is an exact copy of the generator Q.


File: gsl-ref.info, Node: Quasi-random number generator algorithms, Next: Quasi-random number generator examples, Prev: Saving and restoring quasi-random number generator state, Up: Quasi-Random Sequences

19.5 Quasi-random number generator algorithms
=============================================

The following quasi-random sequence algorithms are available,

 -- Generator: gsl_qrng_niederreiter_2
     This generator uses the algorithm described in Bratley, Fox, Niederreiter, `ACM Trans. Model. Comp. Sim.' 2, 195 (1992).  It is valid up to 12 dimensions.

 -- Generator: gsl_qrng_sobol
     This generator uses the Sobol sequence described in Antonov, Saleev, `USSR Comput. Maths. Math. Phys.' 19, 252 (1980).  It is valid up to 40 dimensions.

 -- Generator: gsl_qrng_halton
 -- Generator: gsl_qrng_reversehalton
     These generators use the Halton and reverse Halton sequences described in J.H. Halton, `Numerische Mathematik' 2, 84-90 (1960) and B. Vandewoestyne and R. Cools `Computational and Applied Mathematics' 189, 1&2, 341-361 (2006).  They are valid up to 1229 dimensions.


File: gsl-ref.info, Node: Quasi-random number generator examples, Next: Quasi-random number references, Prev: Quasi-random number generator algorithms, Up: Quasi-Random Sequences

19.6 Examples
=============

The following program prints the first 1024 points of the 2-dimensional Sobol sequence.

     #include <stdio.h>
     #include <gsl/gsl_qrng.h>

     int main (void)
     {
       int i;
       gsl_qrng * q = gsl_qrng_alloc (gsl_qrng_sobol, 2);

       for (i = 0; i < 1024; i++)
         {
           double v[2];
           gsl_qrng_get (q, v);
           printf ("%.5f %.5f\n", v[0], v[1]);
         }

       gsl_qrng_free (q);

       return 0;
     }

Here is the output from the program,

     $ ./a.out
     0.50000 0.50000
     0.75000 0.25000
     0.25000 0.75000
     0.37500 0.37500
     0.87500 0.87500
     0.62500 0.12500
     0.12500 0.62500
     ....

It can be seen that successive points progressively fill in the spaces between previous points.


File: gsl-ref.info, Node: Quasi-random number references, Prev: Quasi-random number generator examples, Up: Quasi-Random Sequences

19.7 References
===============

The implementations of the quasi-random sequence routines are based on the algorithms described in the following paper,

     P. Bratley and B.L. Fox and H. Niederreiter, "Algorithm 738: Programs to Generate Niederreiter's Low-discrepancy Sequences", `ACM Transactions on Mathematical Software', Vol. 20, No. 4, December, 1994, p. 494-495.

File: gsl-ref.info, Node: Random Number Distributions, Next: Statistics, Prev: Quasi-Random Sequences, Up: Top 20 Random Number Distributions ****************************** This chapter describes functions for generating random variates and computing their probability distributions. Samples from the distributions described in this chapter can be obtained using any of the random number generators in the library as an underlying source of randomness. In the simplest cases a non-uniform distribution can be obtained analytically from the uniform distribution of a random number generator by applying an appropriate transformation. This method uses one call to the random number generator. More complicated distributions are created by the "acceptance-rejection" method, which compares the desired distribution against a distribution which is similar and known analytically. This usually requires several samples from the generator. The library also provides cumulative distribution functions and inverse cumulative distribution functions, sometimes referred to as quantile functions. The cumulative distribution functions and their inverses are computed separately for the upper and lower tails of the distribution, allowing full accuracy to be retained for small results. The functions for random variates and probability density functions described in this section are declared in `gsl_randist.h'. The corresponding cumulative distribution functions are declared in `gsl_cdf.h'. Note that the discrete random variate functions always return a value of type `unsigned int', and on most platforms this has a maximum value of 2^32-1 ~=~ 4.29e9. They should only be called with a safe range of parameters (where there is a negligible probability of a variate exceeding this limit) to prevent incorrect results due to overflow. * Menu: * Random Number Distribution Introduction:: * The Gaussian Distribution:: * The Gaussian Tail Distribution:: * The Bivariate Gaussian Distribution:: * The Exponential Distribution:: * The Laplace Distribution:: * The Exponential Power Distribution:: * The Cauchy Distribution:: * The Rayleigh Distribution:: * The Rayleigh Tail Distribution:: * The Landau Distribution:: * The Levy alpha-Stable Distributions:: * The Levy skew alpha-Stable Distribution:: * The Gamma Distribution:: * The Flat (Uniform) Distribution:: * The Lognormal Distribution:: * The Chi-squared Distribution:: * The F-distribution:: * The t-distribution:: * The Beta Distribution:: * The Logistic Distribution:: * The Pareto Distribution:: * Spherical Vector Distributions:: * The Weibull Distribution:: * The Type-1 Gumbel Distribution:: * The Type-2 Gumbel Distribution:: * The Dirichlet Distribution:: * General Discrete Distributions:: * The Poisson Distribution:: * The Bernoulli Distribution:: * The Binomial Distribution:: * The Multinomial Distribution:: * The Negative Binomial Distribution:: * The Pascal Distribution:: * The Geometric Distribution:: * The Hypergeometric Distribution:: * The Logarithmic Distribution:: * Shuffling and Sampling:: * Random Number Distribution Examples:: * Random Number Distribution References and Further Reading::  File: gsl-ref.info, Node: Random Number Distribution Introduction, Next: The Gaussian Distribution, Up: Random Number Distributions 20.1 Introduction ================= Continuous random number distributions are defined by a probability density function, p(x), such that the probability of x occurring in the infinitesimal range x to x+dx is p dx. 
The cumulative distribution function for the lower tail P(x) is defined by the integral, P(x) = \int_{-\infty}^{x} dx' p(x') and gives the probability of a variate taking a value less than x. The cumulative distribution function for the upper tail Q(x) is defined by the integral, Q(x) = \int_{x}^{+\infty} dx' p(x') and gives the probability of a variate taking a value greater than x. The upper and lower cumulative distribution functions are related by P(x) + Q(x) = 1 and satisfy 0 <= P(x) <= 1, 0 <= Q(x) <= 1. The inverse cumulative distributions, x=P^{-1}(P) and x=Q^{-1}(Q) give the values of x which correspond to a specific value of P or Q. They can be used to find confidence limits from probability values. For discrete distributions the probability of sampling the integer value k is given by p(k), where \sum_k p(k) = 1. The cumulative distribution for the lower tail P(k) of a discrete distribution is defined as, P(k) = \sum_{i <= k} p(i) where the sum is over the allowed range of the distribution less than or equal to k. The cumulative distribution for the upper tail of a discrete distribution Q(k) is defined as Q(k) = \sum_{i > k} p(i) giving the sum of probabilities for all values greater than k. These two definitions satisfy the identity P(k)+Q(k)=1. If the range of the distribution is 1 to n inclusive then P(n)=1, Q(n)=0 while P(1) = p(1), Q(1)=1-p(1).  File: gsl-ref.info, Node: The Gaussian Distribution, Next: The Gaussian Tail Distribution, Prev: Random Number Distribution Introduction, Up: Random Number Distributions 20.2 The Gaussian Distribution ============================== -- Function: double gsl_ran_gaussian (const gsl_rng * R, double SIGMA) This function returns a Gaussian random variate, with mean zero and standard deviation SIGMA. The probability distribution for Gaussian random variates is, p(x) dx = {1 \over \sqrt{2 \pi \sigma^2}} \exp (-x^2 / 2\sigma^2) dx for x in the range -\infty to +\infty. Use the transformation z = \mu + x on the numbers returned by `gsl_ran_gaussian' to obtain a Gaussian distribution with mean \mu. This function uses the Box-Muller algorithm which requires two calls to the random number generator R. -- Function: double gsl_ran_gaussian_pdf (double X, double SIGMA) This function computes the probability density p(x) at X for a Gaussian distribution with standard deviation SIGMA, using the formula given above. -- Function: double gsl_ran_gaussian_ziggurat (const gsl_rng * R, double SIGMA) -- Function: double gsl_ran_gaussian_ratio_method (const gsl_rng * R, double SIGMA) This function computes a Gaussian random variate using the alternative Marsaglia-Tsang ziggurat and Kinderman-Monahan-Leva ratio methods. The Ziggurat algorithm is the fastest available algorithm in most cases. -- Function: double gsl_ran_ugaussian (const gsl_rng * R) -- Function: double gsl_ran_ugaussian_pdf (double X) -- Function: double gsl_ran_ugaussian_ratio_method (const gsl_rng * R) These functions compute results for the unit Gaussian distribution. They are equivalent to the functions above with a standard deviation of one, SIGMA = 1. -- Function: double gsl_cdf_gaussian_P (double X, double SIGMA) -- Function: double gsl_cdf_gaussian_Q (double X, double SIGMA) -- Function: double gsl_cdf_gaussian_Pinv (double P, double SIGMA) -- Function: double gsl_cdf_gaussian_Qinv (double Q, double SIGMA) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Gaussian distribution with standard deviation SIGMA. 
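     As an illustrative sketch (not one of the manual's own examples), the following program draws a single Gaussian variate with SIGMA = 2 and evaluates the corresponding lower and upper tail probabilities with the functions above; the generator type and the value of SIGMA are arbitrary choices,

          #include <stdio.h>
          #include <gsl/gsl_rng.h>
          #include <gsl/gsl_randist.h>
          #include <gsl/gsl_cdf.h>

          int main (void)
          {
            double sigma = 2.0;   /* illustrative standard deviation */
            gsl_rng * r = gsl_rng_alloc (gsl_rng_mt19937);

            double x = gsl_ran_gaussian (r, sigma);
            double p = gsl_cdf_gaussian_P (x, sigma);  /* lower tail P(x) */
            double q = gsl_cdf_gaussian_Q (x, sigma);  /* upper tail Q(x) */

            printf ("x = %g  P(x) = %g  Q(x) = %g  P+Q = %g\n",
                    x, p, q, p + q);

            gsl_rng_free (r);
            return 0;
          }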
-- Function: double gsl_cdf_ugaussian_P (double X) -- Function: double gsl_cdf_ugaussian_Q (double X) -- Function: double gsl_cdf_ugaussian_Pinv (double P) -- Function: double gsl_cdf_ugaussian_Qinv (double Q) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the unit Gaussian distribution.  File: gsl-ref.info, Node: The Gaussian Tail Distribution, Next: The Bivariate Gaussian Distribution, Prev: The Gaussian Distribution, Up: Random Number Distributions 20.3 The Gaussian Tail Distribution =================================== -- Function: double gsl_ran_gaussian_tail (const gsl_rng * R, double A, double SIGMA) This function provides random variates from the upper tail of a Gaussian distribution with standard deviation SIGMA. The values returned are larger than the lower limit A, which must be positive. The method is based on Marsaglia's famous rectangle-wedge-tail algorithm (Ann. Math. Stat. 32, 894-899 (1961)), with this aspect explained in Knuth, v2, 3rd ed, p139,586 (exercise 11). The probability distribution for Gaussian tail random variates is, p(x) dx = {1 \over N(a;\sigma) \sqrt{2 \pi \sigma^2}} \exp (- x^2/(2 \sigma^2)) dx for x > a where N(a;\sigma) is the normalization constant, N(a;\sigma) = (1/2) erfc(a / sqrt(2 sigma^2)). -- Function: double gsl_ran_gaussian_tail_pdf (double X, double A, double SIGMA) This function computes the probability density p(x) at X for a Gaussian tail distribution with standard deviation SIGMA and lower limit A, using the formula given above. -- Function: double gsl_ran_ugaussian_tail (const gsl_rng * R, double A) -- Function: double gsl_ran_ugaussian_tail_pdf (double X, double A) These functions compute results for the tail of a unit Gaussian distribution. They are equivalent to the functions above with a standard deviation of one, SIGMA = 1.  File: gsl-ref.info, Node: The Bivariate Gaussian Distribution, Next: The Exponential Distribution, Prev: The Gaussian Tail Distribution, Up: Random Number Distributions 20.4 The Bivariate Gaussian Distribution ======================================== -- Function: void gsl_ran_bivariate_gaussian (const gsl_rng * R, double SIGMA_X, double SIGMA_Y, double RHO, double * X, double * Y) This function generates a pair of correlated Gaussian variates, with mean zero, correlation coefficient RHO and standard deviations SIGMA_X and SIGMA_Y in the x and y directions. The probability distribution for bivariate Gaussian random variates is, p(x,y) dx dy = {1 \over 2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp (-(x^2/\sigma_x^2 + y^2/\sigma_y^2 - 2 \rho x y/(\sigma_x\sigma_y))/2(1-\rho^2)) dx dy for x,y in the range -\infty to +\infty. The correlation coefficient RHO should lie between 1 and -1. -- Function: double gsl_ran_bivariate_gaussian_pdf (double X, double Y, double SIGMA_X, double SIGMA_Y, double RHO) This function computes the probability density p(x,y) at (X,Y) for a bivariate Gaussian distribution with standard deviations SIGMA_X, SIGMA_Y and correlation coefficient RHO, using the formula given above.  File: gsl-ref.info, Node: The Exponential Distribution, Next: The Laplace Distribution, Prev: The Bivariate Gaussian Distribution, Up: Random Number Distributions 20.5 The Exponential Distribution ================================= -- Function: double gsl_ran_exponential (const gsl_rng * R, double MU) This function returns a random variate from the exponential distribution with mean MU. The distribution is, p(x) dx = {1 \over \mu} \exp(-x/\mu) dx for x >= 0. 
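A minimal sketch of the use of `gsl_ran_exponential' (the generator setup is the same as in the Gaussian example above, and the mean is an arbitrary example value) which checks that the sample mean of the variates approaches MU is,

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       double mu = 1.5;                   /* arbitrary example mean */
       double sum = 0;
       int i, n = 100000;

       gsl_rng_env_setup ();
       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       for (i = 0; i < n; i++)
         sum += gsl_ran_exponential (r, mu);

       /* the sample mean should be close to mu */
       printf ("sample mean = %g (mu = %g)\n", sum / n, mu);

       gsl_rng_free (r);
       return 0;
     }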
-- Function: double gsl_ran_exponential_pdf (double X, double MU) This function computes the probability density p(x) at X for an exponential distribution with mean MU, using the formula given above. -- Function: double gsl_cdf_exponential_P (double X, double MU) -- Function: double gsl_cdf_exponential_Q (double X, double MU) -- Function: double gsl_cdf_exponential_Pinv (double P, double MU) -- Function: double gsl_cdf_exponential_Qinv (double Q, double MU) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the exponential distribution with mean MU.  File: gsl-ref.info, Node: The Laplace Distribution, Next: The Exponential Power Distribution, Prev: The Exponential Distribution, Up: Random Number Distributions 20.6 The Laplace Distribution ============================= -- Function: double gsl_ran_laplace (const gsl_rng * R, double A) This function returns a random variate from the Laplace distribution with width A. The distribution is, p(x) dx = {1 \over 2 a} \exp(-|x/a|) dx for -\infty < x < \infty. -- Function: double gsl_ran_laplace_pdf (double X, double A) This function computes the probability density p(x) at X for a Laplace distribution with width A, using the formula given above. -- Function: double gsl_cdf_laplace_P (double X, double A) -- Function: double gsl_cdf_laplace_Q (double X, double A) -- Function: double gsl_cdf_laplace_Pinv (double P, double A) -- Function: double gsl_cdf_laplace_Qinv (double Q, double A) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Laplace distribution with width A.  File: gsl-ref.info, Node: The Exponential Power Distribution, Next: The Cauchy Distribution, Prev: The Laplace Distribution, Up: Random Number Distributions 20.7 The Exponential Power Distribution ======================================= -- Function: double gsl_ran_exppow (const gsl_rng * R, double A, double B) This function returns a random variate from the exponential power distribution with scale parameter A and exponent B. The distribution is, p(x) dx = {1 \over 2 a \Gamma(1+1/b)} \exp(-|x/a|^b) dx for -\infty < x < +\infty. For b = 1 this reduces to the Laplace distribution. For b = 2 it has the same form as a Gaussian distribution, but with a = \sqrt{2} \sigma. -- Function: double gsl_ran_exppow_pdf (double X, double A, double B) This function computes the probability density p(x) at X for an exponential power distribution with scale parameter A and exponent B, using the formula given above. -- Function: double gsl_cdf_exppow_P (double X, double A, double B) -- Function: double gsl_cdf_exppow_Q (double X, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) for the exponential power distribution with parameters A and B.  File: gsl-ref.info, Node: The Cauchy Distribution, Next: The Rayleigh Distribution, Prev: The Exponential Power Distribution, Up: Random Number Distributions 20.8 The Cauchy Distribution ============================ -- Function: double gsl_ran_cauchy (const gsl_rng * R, double A) This function returns a random variate from the Cauchy distribution with scale parameter A. The probability distribution for Cauchy random variates is, p(x) dx = {1 \over a\pi (1 + (x/a)^2) } dx for x in the range -\infty to +\infty. The Cauchy distribution is also known as the Lorentz distribution.
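The Cauchy distribution has no mean, so a simple check on the variates uses its median of zero instead. A short sketch along these lines, with an arbitrarily chosen scale parameter and the usual generator setup, is,

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       double a = 2.0;                    /* arbitrary scale parameter */
       int i, n = 100000, negative = 0;

       gsl_rng_env_setup ();
       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       /* the Cauchy distribution has no mean, but its median is zero,
          so roughly half of the variates should be negative */
       for (i = 0; i < n; i++)
         if (gsl_ran_cauchy (r, a) < 0)
           negative++;

       printf ("fraction below zero = %g\n", (double) negative / n);

       gsl_rng_free (r);
       return 0;
     }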
-- Function: double gsl_ran_cauchy_pdf (double X, double A) This function computes the probability density p(x) at X for a Cauchy distribution with scale parameter A, using the formula given above. -- Function: double gsl_cdf_cauchy_P (double X, double A) -- Function: double gsl_cdf_cauchy_Q (double X, double A) -- Function: double gsl_cdf_cauchy_Pinv (double P, double A) -- Function: double gsl_cdf_cauchy_Qinv (double Q, double A) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Cauchy distribution with scale parameter A.  File: gsl-ref.info, Node: The Rayleigh Distribution, Next: The Rayleigh Tail Distribution, Prev: The Cauchy Distribution, Up: Random Number Distributions 20.9 The Rayleigh Distribution ============================== -- Function: double gsl_ran_rayleigh (const gsl_rng * R, double SIGMA) This function returns a random variate from the Rayleigh distribution with scale parameter SIGMA. The distribution is, p(x) dx = {x \over \sigma^2} \exp(- x^2/(2 \sigma^2)) dx for x > 0. -- Function: double gsl_ran_rayleigh_pdf (double X, double SIGMA) This function computes the probability density p(x) at X for a Rayleigh distribution with scale parameter SIGMA, using the formula given above. -- Function: double gsl_cdf_rayleigh_P (double X, double SIGMA) -- Function: double gsl_cdf_rayleigh_Q (double X, double SIGMA) -- Function: double gsl_cdf_rayleigh_Pinv (double P, double SIGMA) -- Function: double gsl_cdf_rayleigh_Qinv (double Q, double SIGMA) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Rayleigh distribution with scale parameter SIGMA.  File: gsl-ref.info, Node: The Rayleigh Tail Distribution, Next: The Landau Distribution, Prev: The Rayleigh Distribution, Up: Random Number Distributions 20.10 The Rayleigh Tail Distribution ==================================== -- Function: double gsl_ran_rayleigh_tail (const gsl_rng * R, double A, double SIGMA) This function returns a random variate from the tail of the Rayleigh distribution with scale parameter SIGMA and a lower limit of A. The distribution is, p(x) dx = {x \over \sigma^2} \exp ((a^2 - x^2) /(2 \sigma^2)) dx for x > a. -- Function: double gsl_ran_rayleigh_tail_pdf (double X, double A, double SIGMA) This function computes the probability density p(x) at X for a Rayleigh tail distribution with scale parameter SIGMA and lower limit A, using the formula given above.  File: gsl-ref.info, Node: The Landau Distribution, Next: The Levy alpha-Stable Distributions, Prev: The Rayleigh Tail Distribution, Up: Random Number Distributions 20.11 The Landau Distribution ============================= -- Function: double gsl_ran_landau (const gsl_rng * R) This function returns a random variate from the Landau distribution. The probability distribution for Landau random variates is defined analytically by the complex integral, p(x) = (1/(2 \pi i)) \int_{c-i\infty}^{c+i\infty} ds exp(s log(s) + x s) For numerical purposes it is more convenient to use the following equivalent form of the integral, p(x) = (1/\pi) \int_0^\infty dt \exp(-t \log(t) - x t) \sin(\pi t). -- Function: double gsl_ran_landau_pdf (double X) This function computes the probability density p(x) at X for the Landau distribution using an approximation to the formula given above.  
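A minimal sketch using these two functions together (generator setup as in the earlier examples) is,

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;
       int i;

       gsl_rng_env_setup ();
       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       /* print a few Landau variates and the density at x = 0 */
       for (i = 0; i < 5; i++)
         printf ("%g\n", gsl_ran_landau (r));

       printf ("p(0) = %g\n", gsl_ran_landau_pdf (0.0));

       gsl_rng_free (r);
       return 0;
     }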
File: gsl-ref.info, Node: The Levy alpha-Stable Distributions, Next: The Levy skew alpha-Stable Distribution, Prev: The Landau Distribution, Up: Random Number Distributions 20.12 The Levy alpha-Stable Distributions ========================================= -- Function: double gsl_ran_levy (const gsl_rng * R, double C, double ALPHA) This function returns a random variate from the Levy symmetric stable distribution with scale C and exponent ALPHA. The symmetric stable probability distribution is defined by a Fourier transform, p(x) = {1 \over 2 \pi} \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha) There is no explicit solution for the form of p(x) and the library does not define a corresponding `pdf' function. For \alpha = 1 the distribution reduces to the Cauchy distribution. For \alpha = 2 it is a Gaussian distribution with \sigma = \sqrt{2} c. For \alpha < 1 the tails of the distribution become extremely wide. The algorithm only works for 0 < alpha <= 2.  File: gsl-ref.info, Node: The Levy skew alpha-Stable Distribution, Next: The Gamma Distribution, Prev: The Levy alpha-Stable Distributions, Up: Random Number Distributions 20.13 The Levy skew alpha-Stable Distribution ============================================= -- Function: double gsl_ran_levy_skew (const gsl_rng * R, double C, double ALPHA, double BETA) This function returns a random variate from the Levy skew stable distribution with scale C, exponent ALPHA and skewness parameter BETA. The skewness parameter must lie in the range [-1,1]. The Levy skew stable probability distribution is defined by a Fourier transform, p(x) = {1 \over 2 \pi} \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha (1-i beta sign(t) tan(pi alpha/2))) When \alpha = 1 the term \tan(\pi \alpha/2) is replaced by -(2/\pi)\log|t|. There is no explicit solution for the form of p(x) and the library does not define a corresponding `pdf' function. For \alpha = 2 the distribution reduces to a Gaussian distribution with \sigma = \sqrt{2} c and the skewness parameter has no effect. For \alpha < 1 the tails of the distribution become extremely wide. The symmetric distribution corresponds to \beta = 0. The algorithm only works for 0 < alpha <= 2. The Levy alpha-stable distributions have the property that if N alpha-stable variates are drawn from the distribution p(c, \alpha, \beta) then the sum Y = X_1 + X_2 + \dots + X_N will also be distributed as an alpha-stable variate, p(N^(1/\alpha) c, \alpha, \beta).  File: gsl-ref.info, Node: The Gamma Distribution, Next: The Flat (Uniform) Distribution, Prev: The Levy skew alpha-Stable Distribution, Up: Random Number Distributions 20.14 The Gamma Distribution ============================ -- Function: double gsl_ran_gamma (const gsl_rng * R, double A, double B) This function returns a random variate from the gamma distribution. The distribution function is, p(x) dx = {1 \over \Gamma(a) b^a} x^{a-1} e^{-x/b} dx for x > 0. The gamma distribution with an integer parameter A is known as the Erlang distribution. The variates are computed using the Marsaglia-Tsang fast gamma method. This function was previously called `gsl_ran_gamma_mt' and can still be accessed using this name. -- Function: double gsl_ran_gamma_knuth (const gsl_rng * R, double A, double B) This function returns a gamma variate using the algorithms from Knuth (vol 2).
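For example, the following sketch (with arbitrarily chosen parameters and the usual generator setup) checks that the sample mean of `gsl_ran_gamma' variates approaches the expected value a b,

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       double a = 3.0, b = 2.0;           /* arbitrary gamma parameters */
       double sum = 0;
       int i, n = 100000;

       gsl_rng_env_setup ();
       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       for (i = 0; i < n; i++)
         sum += gsl_ran_gamma (r, a, b);

       /* the mean of the gamma distribution is a*b */
       printf ("sample mean = %g (a*b = %g)\n", sum / n, a * b);

       gsl_rng_free (r);
       return 0;
     }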
-- Function: double gsl_ran_gamma_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a gamma distribution with parameters A and B, using the formula given above. -- Function: double gsl_cdf_gamma_P (double X, double A, double B) -- Function: double gsl_cdf_gamma_Q (double X, double A, double B) -- Function: double gsl_cdf_gamma_Pinv (double P, double A, double B) -- Function: double gsl_cdf_gamma_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the gamma distribution with parameters A and B.  File: gsl-ref.info, Node: The Flat (Uniform) Distribution, Next: The Lognormal Distribution, Prev: The Gamma Distribution, Up: Random Number Distributions 20.15 The Flat (Uniform) Distribution ===================================== -- Function: double gsl_ran_flat (const gsl_rng * R, double A, double B) This function returns a random variate from the flat (uniform) distribution from A to B. The distribution is, p(x) dx = {1 \over (b-a)} dx if a <= x < b and 0 otherwise. -- Function: double gsl_ran_flat_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a uniform distribution from A to B, using the formula given above. -- Function: double gsl_cdf_flat_P (double X, double A, double B) -- Function: double gsl_cdf_flat_Q (double X, double A, double B) -- Function: double gsl_cdf_flat_Pinv (double P, double A, double B) -- Function: double gsl_cdf_flat_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for a uniform distribution from A to B.  File: gsl-ref.info, Node: The Lognormal Distribution, Next: The Chi-squared Distribution, Prev: The Flat (Uniform) Distribution, Up: Random Number Distributions 20.16 The Lognormal Distribution ================================ -- Function: double gsl_ran_lognormal (const gsl_rng * R, double ZETA, double SIGMA) This function returns a random variate from the lognormal distribution. The distribution function is, p(x) dx = {1 \over x \sqrt{2 \pi \sigma^2} } \exp(-(\ln(x) - \zeta)^2/2 \sigma^2) dx for x > 0. -- Function: double gsl_ran_lognormal_pdf (double X, double ZETA, double SIGMA) This function computes the probability density p(x) at X for a lognormal distribution with parameters ZETA and SIGMA, using the formula given above. -- Function: double gsl_cdf_lognormal_P (double X, double ZETA, double SIGMA) -- Function: double gsl_cdf_lognormal_Q (double X, double ZETA, double SIGMA) -- Function: double gsl_cdf_lognormal_Pinv (double P, double ZETA, double SIGMA) -- Function: double gsl_cdf_lognormal_Qinv (double Q, double ZETA, double SIGMA) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the lognormal distribution with parameters ZETA and SIGMA.  File: gsl-ref.info, Node: The Chi-squared Distribution, Next: The F-distribution, Prev: The Lognormal Distribution, Up: Random Number Distributions 20.17 The Chi-squared Distribution ================================== The chi-squared distribution arises in statistics. If Y_i are n independent Gaussian random variates with unit variance then the sum-of-squares, X = \sum_i Y_i^2 has a chi-squared distribution with n degrees of freedom. -- Function: double gsl_ran_chisq (const gsl_rng * R, double NU) This function returns a random variate from the chi-squared distribution with NU degrees of freedom.
The distribution function is, p(x) dx = {1 \over 2 \Gamma(\nu/2) } (x/2)^{\nu/2 - 1} \exp(-x/2) dx for x >= 0. -- Function: double gsl_ran_chisq_pdf (double X, double NU) This function computes the probability density p(x) at X for a chi-squared distribution with NU degrees of freedom, using the formula given above. -- Function: double gsl_cdf_chisq_P (double X, double NU) -- Function: double gsl_cdf_chisq_Q (double X, double NU) -- Function: double gsl_cdf_chisq_Pinv (double P, double NU) -- Function: double gsl_cdf_chisq_Qinv (double Q, double NU) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the chi-squared distribution with NU degrees of freedom.  File: gsl-ref.info, Node: The F-distribution, Next: The t-distribution, Prev: The Chi-squared Distribution, Up: Random Number Distributions 20.18 The F-distribution ======================== The F-distribution arises in statistics. If Y_1 and Y_2 are chi-squared deviates with \nu_1 and \nu_2 degrees of freedom then the ratio, X = { (Y_1 / \nu_1) \over (Y_2 / \nu_2) } has an F-distribution F(x;\nu_1,\nu_2). -- Function: double gsl_ran_fdist (const gsl_rng * R, double NU1, double NU2) This function returns a random variate from the F-distribution with degrees of freedom NU1 and NU2. The distribution function is, p(x) dx = { \Gamma((\nu_1 + \nu_2)/2) \over \Gamma(\nu_1/2) \Gamma(\nu_2/2) } \nu_1^{\nu_1/2} \nu_2^{\nu_2/2} x^{\nu_1/2 - 1} (\nu_2 + \nu_1 x)^{-\nu_1/2 -\nu_2/2} for x >= 0. -- Function: double gsl_ran_fdist_pdf (double X, double NU1, double NU2) This function computes the probability density p(x) at X for an F-distribution with NU1 and NU2 degrees of freedom, using the formula given above. -- Function: double gsl_cdf_fdist_P (double X, double NU1, double NU2) -- Function: double gsl_cdf_fdist_Q (double X, double NU1, double NU2) -- Function: double gsl_cdf_fdist_Pinv (double P, double NU1, double NU2) -- Function: double gsl_cdf_fdist_Qinv (double Q, double NU1, double NU2) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the F-distribution with NU1 and NU2 degrees of freedom.  File: gsl-ref.info, Node: The t-distribution, Next: The Beta Distribution, Prev: The F-distribution, Up: Random Number Distributions 20.19 The t-distribution ======================== The t-distribution arises in statistics. If Y_1 has a normal distribution and Y_2 has a chi-squared distribution with \nu degrees of freedom then the ratio, X = { Y_1 \over \sqrt{Y_2 / \nu} } has a t-distribution t(x;\nu) with \nu degrees of freedom. -- Function: double gsl_ran_tdist (const gsl_rng * R, double NU) This function returns a random variate from the t-distribution. The distribution function is, p(x) dx = {\Gamma((\nu + 1)/2) \over \sqrt{\pi \nu} \Gamma(\nu/2)} (1 + x^2/\nu)^{-(\nu + 1)/2} dx for -\infty < x < +\infty. -- Function: double gsl_ran_tdist_pdf (double X, double NU) This function computes the probability density p(x) at X for a t-distribution with NU degrees of freedom, using the formula given above. -- Function: double gsl_cdf_tdist_P (double X, double NU) -- Function: double gsl_cdf_tdist_Q (double X, double NU) -- Function: double gsl_cdf_tdist_Pinv (double P, double NU) -- Function: double gsl_cdf_tdist_Qinv (double Q, double NU) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the t-distribution with NU degrees of freedom.  
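As noted in the introduction, the inverse cumulative distribution functions can be used to find confidence limits. The short sketch below tabulates the two-sided 95% critical values of the t-distribution, i.e. the values of t for which P(t) = 0.975, for the first few degrees of freedom,

     #include <stdio.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       double nu;

       /* two-sided 95% critical values, i.e. the t with P(t) = 0.975 */
       for (nu = 1; nu <= 5; nu++)
         printf ("nu = %g  t = %g\n", nu, gsl_cdf_tdist_Pinv (0.975, nu));

       return 0;
     }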
File: gsl-ref.info, Node: The Beta Distribution, Next: The Logistic Distribution, Prev: The t-distribution, Up: Random Number Distributions 20.20 The Beta Distribution =========================== -- Function: double gsl_ran_beta (const gsl_rng * R, double A, double B) This function returns a random variate from the beta distribution. The distribution function is, p(x) dx = {\Gamma(a+b) \over \Gamma(a) \Gamma(b)} x^{a-1} (1-x)^{b-1} dx for 0 <= x <= 1. -- Function: double gsl_ran_beta_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a beta distribution with parameters A and B, using the formula given above. -- Function: double gsl_cdf_beta_P (double X, double A, double B) -- Function: double gsl_cdf_beta_Q (double X, double A, double B) -- Function: double gsl_cdf_beta_Pinv (double P, double A, double B) -- Function: double gsl_cdf_beta_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the beta distribution with parameters A and B.  File: gsl-ref.info, Node: The Logistic Distribution, Next: The Pareto Distribution, Prev: The Beta Distribution, Up: Random Number Distributions 20.21 The Logistic Distribution =============================== -- Function: double gsl_ran_logistic (const gsl_rng * R, double A) This function returns a random variate from the logistic distribution. The distribution function is, p(x) dx = { \exp(-x/a) \over a (1 + \exp(-x/a))^2 } dx for -\infty < x < +\infty. -- Function: double gsl_ran_logistic_pdf (double X, double A) This function computes the probability density p(x) at X for a logistic distribution with scale parameter A, using the formula given above. -- Function: double gsl_cdf_logistic_P (double X, double A) -- Function: double gsl_cdf_logistic_Q (double X, double A) -- Function: double gsl_cdf_logistic_Pinv (double P, double A) -- Function: double gsl_cdf_logistic_Qinv (double Q, double A) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the logistic distribution with scale parameter A.  File: gsl-ref.info, Node: The Pareto Distribution, Next: Spherical Vector Distributions, Prev: The Logistic Distribution, Up: Random Number Distributions 20.22 The Pareto Distribution ============================= -- Function: double gsl_ran_pareto (const gsl_rng * R, double A, double B) This function returns a random variate from the Pareto distribution of order A. The distribution function is, p(x) dx = (a/b) / (x/b)^{a+1} dx for x >= b. -- Function: double gsl_ran_pareto_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a Pareto distribution with exponent A and scale B, using the formula given above. -- Function: double gsl_cdf_pareto_P (double X, double A, double B) -- Function: double gsl_cdf_pareto_Q (double X, double A, double B) -- Function: double gsl_cdf_pareto_Pinv (double P, double A, double B) -- Function: double gsl_cdf_pareto_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Pareto distribution with exponent A and scale B.
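A short consistency check (with arbitrarily chosen parameters and the usual generator setup) which compares the observed upper-tail fraction of `gsl_ran_pareto' variates with the value returned by `gsl_cdf_pareto_Q' is,

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       double a = 3.0, b = 2.0;           /* arbitrary exponent and scale */
       int i, n = 100000, above = 0;

       gsl_rng_env_setup ();
       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       /* count variates in the upper tail x > 2b and compare with Q(2b) */
       for (i = 0; i < n; i++)
         if (gsl_ran_pareto (r, a, b) > 2 * b)
           above++;

       printf ("empirical Q(2b)            = %g\n", (double) above / n);
       printf ("gsl_cdf_pareto_Q (2b, a, b) = %g\n",
               gsl_cdf_pareto_Q (2 * b, a, b));

       gsl_rng_free (r);
       return 0;
     }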