TPapprox.mw

Generating Efficient Numerical Evaluation Routines for Bivariate Functions via Tensor Product Series 

Keith O. Geddes
Symbolic Computation Group
David R. Cheriton School of Computer Science
University of Waterloo, Waterloo, Ontario N2L 3G1 Canada

Joint work with Frederick W. Chapman and Xiang Wang
 

Abstract 

Given a bivariate function, we consider the problem of generating, in an automated manner, a routine for efficient numerical evaluation at any point in a specified rectangular region of the x-y plane. The goals for the generated routine are accuracy (for a specified fixed precision), efficiency, and that the routine be "purely numerical" in the sense that it can be translated into a language such as C and compiled for inclusion in a numerical library. For example, we are able to generate numerical evaluation routines for the various Bessel functions (with the order treated as a real-valued variable), and other bivariate functions, with significantly increased speed of evaluation compared with current implementations. 

We exploit the capabilities of a computer algebra system to achieve the desired level of automation. A fundamental tool in our method is the natural tensor product series developed in a doctoral thesis by Frederick W. Chapman in 2003. Using this technique, f(x, y) is approximated by an interpolation series such that each term in the series is a tensor product c[i]*g[i](x)*h[i](y). The efficiency of approximation achieved by this method derives from the fact that the univariate basis functions are cross-sections of the original bivariate function. The bivariate approximation problem is thereby reduced to a sequence of univariate approximation problems which can be handled by various well-known techniques. Assuming that the univariate basis functions are analytic with isolated singularities, we apply singularity-handling techniques to ensure the efficiency of the univariate approximations. 

Hybrid Symbolic-Numeric Integration (1-D) 

 

 

Example 2.1: Series expansion at a singularity 

> g := x -> sin(x)*ln(x)*exp(-x^3)
 

x -> sin(x)*ln(x)*exp(-x^3) (2.1.1)
 

> plot(g, 0 .. 3)
 

[2-D plot of g on the interval 0 .. 3]
 

>
 

 

Form the integral on [0, 3] and apply numerical integration. 

 

> intg := Int(g(x), x = 0 .. 3)
 

Int(sin(x)*ln(x)*exp(-x^3), x = 0 .. 3) (2.1.2)
 

> evalf(intg)
 

-.1957885158 (2.1.3)
 

 

Evaluating numerically to 25 digits yields the following result. 

 

> evalf[25](intg)
 

-.1957885158488077265323200 (2.1.4)
 

 

Note that the integrand has a logarithmic singularity at x = 0, as the following series expansion shows. 

 

> series(g(x), x = 0)
 

x*ln(x) - (1/6)*ln(x)*x^3 - ln(x)*x^4 + (1/120)*ln(x)*x^5 + O(x^6) (2.1.5)
 

 

The hybrid symbolic-numeric technique (automated in Maple) includes the concept of term-by-term integration of such a series expansion, employing symbolic integration for the individual terms. 
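The term-by-term idea can be sketched outside Maple. A minimal Python check (all names hypothetical) integrates each series term x^k*ln(x) in closed form, int(x^k*ln(x), x = 0 .. eps) = eps^(k+1)*(ln(eps)/(k+1) - 1/(k+1)^2), and compares against brute-force quadrature of the full integrand near 0:

```python
import math

# Closed form: int(x^k*ln(x), x = 0 .. eps) = eps^(k+1)*(ln(eps)/(k+1) - 1/(k+1)^2)
def int_xk_lnx(k, eps):
    return eps ** (k + 1) * (math.log(eps) / (k + 1) - 1.0 / (k + 1) ** 2)

# Series of sin(x)*ln(x)*exp(-x^3) near 0: coefficients of x^k*ln(x), per (2.1.5)
coeffs = {1: 1.0, 3: -1.0 / 6.0, 4: -1.0, 5: 1.0 / 120.0}

eps = 0.01
series_val = sum(c * int_xk_lnx(k, eps) for k, c in coeffs.items())

# Compare against composite-midpoint quadrature of the integrand on [0, eps]
def g(x):
    return math.sin(x) * math.log(x) * math.exp(-x ** 3)

n = 20000
h = eps / n
quad_val = h * sum(g((j + 0.5) * h) for j in range(n))
print(series_val, quad_val)  # the two values agree closely on [0, eps]
```

This is only a sketch of the idea; Maple's integrator combines such symbolic term integration with numerical quadrature away from the singularity.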

 

>
 

Example 2.2: Subtracting off a singularity 

> restart
 

> h := x -> ln(1 - cos(2*x))
 

x -> ln(1 - cos(2*x)) (2.2.1)
 

> Int(h(x), x = 0 .. 1)
 

Int(ln(1 - cos(2*x)), x = 0 .. 1) (2.2.2)
 

>
 

 

Note that the graph of this integrand goes to -infinity at x = 0. However, this is an integrable singularity: the integrand behaves like 2*ln(x) near x = 0. 

 

> plot(h, 0 .. 1)
 

[2-D plot of h on the interval 0 .. 1]
 

>
 

 

Note: The technique of subtracting off a singularity illustrated below takes place automatically within Maple's numerical integration routines. 

 

Suppose it is desired to compute the result to 25 digits of accuracy. 

 

> Digits := 25
 

25 (2.2.3)
 

 

The generalized series expansion of  h  at  x = 0  takes the following form. 

 

> series(h(x), x = 0, 8)
 

ln(2) + 2*ln(x) - (1/3)*x^2 - (1/90)*x^4 - (2/2835)*x^6 + O(x^8) (2.2.4)
 

 

The non-regular part is 

 

> q := x -> 2*ln(x)
 

x -> 2*ln(x) (2.2.5)
 

 

The new expression 

 

> newh := h(x) - q(x)
 

ln(1 - cos(2*x)) - 2*ln(x) (2.2.6)
 

 

is analytic on the interval [0, 1]. Thus it can be integrated easily by the default numerical integration method. 

 

> Int(newh, x = 0 .. 1)
 

Int(ln(1 - cos(2*x)) - 2*ln(x), x = 0 .. 1) (2.2.7)
 

> r1 := evalf(%)
 

.5797067685767754431529296 (2.2.8)
 

>
 

 

Integrating  q  is easy because it has the indefinite integral 

 

> int(q(x), x)
 

2*x*ln(x) - 2*x (2.2.9)
 

 

and its definite integral can therefore be computed symbolically. 

 

> r2 := int(q(x), x = 0 .. 1)
 

-2 (2.2.10)
 

 

Finally, summing the two values, we obtain the value for the original definite integration problem. 

 

> Int(h(x), x = 0 .. 1) = r1 + r2
 

Int(ln(1 - cos(2*x)), x = 0 .. 1) = -1.420293231423224556847070 (2.2.11)
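The same subtract-off computation can be mirrored numerically outside Maple. A minimal Python sketch (composite Simpson's rule standing in for Maple's quadrature; all names hypothetical):

```python
import math

def newh(x):
    # h(x) - q(x) = ln(1 - cos(2x)) - 2*ln(x), extended by its limit ln(2) at x = 0
    if x == 0.0:
        return math.log(2.0)
    return math.log(1.0 - math.cos(2.0 * x)) - 2.0 * math.log(x)

# composite Simpson's rule on [0, 1] for the now-analytic integrand
n = 2000  # must be even
s = newh(0.0) + newh(1.0)
for j in range(1, n):
    s += (4 if j % 2 else 2) * newh(j / n)
r1 = s / (3.0 * n)

r2 = -2.0            # int(2*ln(x), x = 0 .. 1) in closed form
result = r1 + r2
print(result)        # close to the Maple value -1.420293231...
```

Because the singular part is integrated symbolically, an ordinary fixed-order rule suffices for the remaining analytic part.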
 

>
 

Example 2.3: Algebraic transformation of variables 

> restart
 

> F := x -> sqrt(sin(x))
 

x -> sqrt(sin(x)) (2.3.1)
 

> intF := Int(F(x), x = 0 .. 2)
 

Int(sqrt(sin(x)), x = 0 .. 2) (2.3.2)
 

 

Note: The change of variables illustrated below takes place automatically within Maple's numerical integration routines. 

 

The generalized series expansion of F(x) at x = 0 is of the form 

 

> series(F(x), x = 0, 5)
 

sqrt(x) - (1/12)*x^(5/2) + O(x^(9/2)) (2.3.3)
 

 

Applying the change of variables t = sqrt(x) (i.e., x = t^2) yields 

 

> r3 := IntegrationTools:-Change(intF, x = t^2)
 

Int(2*sqrt(sin(t^2))*t, t = 0 .. sqrt(2)) (2.3.4)
 

>
 

 

The new integrand is analytic on the interval of integration. 

Thus it can be integrated easily by the default numerical integration method, even at high accuracy. 

 

Suppose that the result is desired to 50 digits of accuracy. 

 

> evalf[50](r3)
 

1.6207234083199665050793106079159335362211371877592 (2.3.5)
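The transformed integral is easy for any standard rule. A Python sketch (composite Simpson's rule; names hypothetical) of the same change of variables:

```python
import math

def g(t):
    # transformed integrand: 2*t*sqrt(sin(t^2)), analytic at t = 0
    return 2.0 * t * math.sqrt(math.sin(t * t))

a, b, n = 0.0, math.sqrt(2.0), 2000   # n must be even
s = g(a) + g(b)
for j in range(1, n):
    s += (4 if j % 2 else 2) * g(a + j * (b - a) / n)
approx = (b - a) * s / (3.0 * n)
print(approx)   # reference value from the worksheet: 1.6207234083199665...
```

Applied directly to sqrt(sin(x)) on [0, 2], the same rule would converge slowly because of the x^(1/2) behaviour at the endpoint; the substitution removes that obstacle.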
 

>
 

Introduction to Tensor Product Series 

The main tool for generating approximations of bivariate functions will be tensor product series. The key concept is interpolation of bivariate functions by what is known as a natural tensor product [Thomas76]. The Ph.D. thesis by Chapman [Chapman03] developed the theory of tensor product series, and we presented an ISSAC'05 paper [CarvajalChapmanGeddes05] which applied the theory to develop a hybrid symbolic-numeric method for the calculation of multidimensional integrals. 

 

In our ISSAC'05 paper, we restricted attention to the case of symmetric functions, which was sufficient for the application to multidimensional integration. We have now extended the method to an efficient implementation of tensor product interpolation for general non-symmetric functions. The Master's thesis by Xiang (Sophy) Wang [Wang08] presented results of applying this technique to the generation of numerical evaluation routines for bivariate functions. 

Tensor Products 

  • A tensor product is a finite sum of terms where each term is a product of univariate functions:
 

s[n](x, y) = Sum(g[i](x)*h[i](y), i = 1 .. n) 

 

  • There are many examples in mathematics:
 

exp(x+y) = exp(x)*exp(y) 

 

cos(x+y) = cos(x)*cos(y) - sin(x)*sin(y) 

 

(x+y)^2 = x^2 + 2*x*y + y^2 

 

  • The rank of a tensor product is the minimum number of terms among all the equivalent representations.
 

  • It is a natural tensor product when the factors of each term can be derived from the original function by a finite linear combination of linear functionals.
 

 

The Splitting Operator 

  • For a bivariate function  f  and a point (a, b) where f(a, b) <> 0, the splitting operator Upsilon[a, b] is defined by
       Upsilon[a, b] f(x, y) = f(x, b)*f(a, y)/f(a, b).
 

  • The point (a, b) is called a splitting point.
 

  • Note: Upsilon[a, b] splits  f  into a rank-one tensor product.
 

  • Upsilon[a, b] f(x, y) interpolates f(x, y) on the lines x = a and y = b, which means that (Upsilon[a, b] f)(a, y) ≡ f(a, y) and (Upsilon[a, b] f)(x, b) ≡ f(x, b).
 

  • These properties allow us to generate a natural tensor product series simply by iterating the splitting operator while varying the choice of splitting point.
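The splitting operator is one line of code. A Python sketch (names hypothetical) verifying the interpolation property on the two lines through the splitting point:

```python
import math

def f(x, y):
    return math.exp(x * x * y * y) * math.cos(x + y)

def split(f, a, b):
    # Upsilon[a, b] f(x, y) = f(x, b)*f(a, y)/f(a, b); requires f(a, b) != 0
    fab = f(a, b)
    return lambda x, y: f(x, b) * f(a, y) / fab

s1 = split(f, 1.0, 1.0)   # a rank-one tensor product: one factor in x, one in y

# interpolation on the two lines through the splitting point (1, 1)
err = max(max(abs(s1(1.0, t) - f(1.0, t)),
              abs(s1(t, 1.0) - f(t, 1.0))) for t in (0.0, 0.3, 0.7))
print(err)   # zero up to roundoff
```

The two univariate factors f(x, b) and f(a, y) are cross-sections of f, which is exactly what makes the series "natural".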
 

 

Tensor Product Series Expansions (a la Newton Interpolation Series) 

  • Not every bivariate function  f  can be expressed as a tensor product of finite rank.
 

  • For an approximation on R = [a, b] x [c, d] where  f  is continuous, we can uniformly approximate  f  by a rank-n tensor product:
 

s[n](x, y) = Sum(g[i](x)*h[i](y), i = 1 .. n) 

  • Define the n-th remainder:  r[n] = f - s[n].
 

  • The following simple algorithm generates a natural tensor product series expansion  s[n]  of a given function  f  on R .
 

Algorithm NTPS 

  • 1. Set r[0] := f and initialize i := 1.
 

  • 2. While Norm(r[i-1], infinity) > delta, iterate the following:
 

  • (a) Choose (a[i], b[i]) in R with abs(r[i-1](a[i], b[i])) = Norm(r[i-1], infinity).
 

  • (b) Let r[i] := r[i-1] - Upsilon[a[i], b[i]] r[i-1] and i := i + 1.
 

  • 3. Let n := i - 1 and return s[n] := f - r[n] to approximate  f  with uniform error Norm(r[n], infinity) <= delta on R.
 

 

  • Refining terminology introduced by Chapman in his doctoral thesis, we call  s[n]  the Geddes-Newton series expansion of  f  to  n  terms with respect to the splitting points {(a[i], b[i])}, i = 1 .. n.
 

  • Comment on computational cost: Note that Algorithm NTPS in its most general form as presented above has one expensive step. Specifically, step 2(a) calls for a two-dimensional search of the region [a, b] x [c, d] for the point where the numerical maximum of the absolute remainder abs(r[i-1](x, y)) is attained.
 

  • Achieving efficient computation: It turns out to be sufficient to replace step 2(a) by two one-dimensional searches (or a single one-dimensional search in the case of symmetric functions). Moreover, for the one-dimensional searches it is sufficient to sample a discrete set of points defined by the midpoints of previous splitting points. (This is based on computational experience; some proofs are needed here.)
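Algorithm NTPS is easy to prototype on a discrete grid. A Python sketch (not the production implementation; the continuous search of step 2(a) is replaced by a full grid search, and all names are hypothetical):

```python
import math

def f(x, y):
    return math.exp(x * x * y * y) * math.cos(x + y)

# Discretized Algorithm NTPS on an m x m grid over the unit square:
# repeatedly split off the rank-one term pivoted at the largest remainder.
m = 41
grid = [i / (m - 1) for i in range(m)]
R = [[f(x, y) for y in grid] for x in grid]   # r[0] = f sampled on the grid

delta = 1e-3
terms = 0
while True:
    # step 2(a): search the grid for the maximum absolute remainder
    ia, ib, best = 0, 0, 0.0
    for i in range(m):
        for j in range(m):
            if abs(R[i][j]) > best:
                ia, ib, best = i, j, abs(R[i][j])
    if best <= delta:
        break
    # step 2(b): r[i] := r[i-1] - Upsilon[a,b] r[i-1] is a rank-one update,
    # which zeroes the pivot row and column of the remainder
    piv = R[ia][ib]
    col = [R[i][ib] for i in range(m)]
    row = [R[ia][j] for j in range(m)]
    for i in range(m):
        for j in range(m):
            R[i][j] -= col[i] * row[j] / piv
    terms += 1

print(terms)   # a handful of terms reaches delta = 1e-3 on the grid
```

Each iteration annihilates the remainder on one grid row and column, mirroring the property that r[n] vanishes on the lines through the splitting points.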
 

 

Example 3.1 

 

Consider the following symmetric function on [0, 1] x [0, 1]: 

f(x, y) = exp(x^2*y^2)*cos(x+y) 

 

The Geddes-Newton series to 3 terms is as follows, where the splitting points are (a, a) for a = 1, 0, .616336: 

s[3](x, y) = Sum(c[i]*g[i](x)*h[i](y), i = 1 .. 3) 

 

where 

c[1] = -.884017 

g[1](x) = exp(x^2)*cos(x+1) 

h[1](y) = exp(y^2)*cos(y+1) 

c[2] = .794868 

g[2](x) = cos(x) + .477636*exp(x^2)*cos(x+1) 

h[2](y) = cos(y) + .477636*exp(y^2)*cos(y+1) 

c[3] = -9.83284 

g[3](x) = exp(.379870*x^2)*cos(x + .616336) - .356576*exp(x^2)*cos(x+1) - .623342*cos(x) 

h[3](y) = exp(.379870*y^2)*cos(y + .616336) - .356576*exp(y^2)*cos(y+1) - .623342*cos(y) 

 

Observations 

  • For the symmetric function f(x, y) of Example 3.1, the splitting points are all located on the diagonal  y = x of the unit square, as illustrated in the following figure.
 

 

[2-D plot: the three splitting points on the diagonal y = x of the unit square]
 

 

  • Note:  s[3](x, y) agrees exactly with  f(x, y) at all points on the horizontal and vertical lines through the three splitting points!
 

  • With this strong interpolating property, even with only three splitting points we find that the maximum error over the unit square satisfies:
 

 

  • If we request a series accurate to 15 digits for the function  f(x, y) defined in Example 3.1, we find that only 12 terms are required (i.e., 12 splitting points). In other words,
 

 

  • The distribution of the 12 splitting points for this particular function is illustrated in the following figure.
 

 

[2-D plot: distribution of the 12 splitting points along the diagonal]
 

 

Convergence of the Series (Informal) 

  • Note similarities with the case of univariate polynomial interpolation via the Newton interpolation series.
 

  • In both cases, addition of a new term in the series corresponds to the addition of a new interpolation point.
 

  • However, in the case of Geddes-Newton series, the interpolation property holds not only at the new point but at all points on two lines passing through that point.
 

  • Intuitively, for any function  f  which is continuous in the unit square U, if the splitting points are dense along the diagonal then it is reasonable to expect that  s[n]  might converge uniformly to  f  on U as n -> infinity.
 

  • In other words, by choosing well distributed splitting points on the diagonal, we can cover the whole unit square with small grid cells. Recall: the remainder r[n] vanishes on the boundary of each cell.
 

  • Some formal convergence results are discussed later in this presentation.
 

A Brief Glance at a Proof Technique 

  • The easiest case for proving convergence is when  f  is analytic in a sufficiently large region.
 

  • The convergence theorem and its proof are similar to the classical case where  f(z) is a function of a single complex variable and s[n](z) denotes the interpolating polynomial.
 

  • Recall the univariate case (Ref:  P.J. Davis, Interpolation and Approximation [Davis63]), as summarized below.
 

  • Basic tool: a contour integral representation of f(z) - s[n](z).
 

 

 

[sketch: interpolation set A inside the region of analyticity S in the complex plane]

 

  • Let  f  be analytic on S ⊂ C and suppose that the interpolation points satisfy a[i] ∈ A ⊂ S.
 

  • Let  s[n](z)  be the polynomial interpolating  f(z)  at the points {a[i]}.
 

  • alpha := diam(A),  delta := dist(A, ∂S).
 

  • Then for all z ∈ A,  abs(f(z) - s[n](z)) <= M*(alpha/delta)^n  where  M = (abs(∂S)/(2*Pi*delta)) * Norm(f, ∂S).
 

  • Conclusion:  If delta > alpha then limit(Norm(f - s[n], A), n = infinity) = 0.
 

 

 

 

 

  • For our case, we consider a bivariate function  f(w, z)  of two complex variables, and  s[n](w, z)  denotes the Geddes-Newton series expansion of  f  to  n  terms with respect to the splitting points {(a[i], b[i])}.
 

  • Notation:  w ∈ S and z ∈ T; the splitting coordinates satisfy a[i] ∈ A ⊂ S and b[i] ∈ B ⊂ T.
 

  • We derive a contour integration formula for the remainder  r[n] = f - s[n].
 

  • Here,  p[n](w) = Product(w - a[k], k = 1 .. n)  and  q[n](z) = Product(z - b[k], k = 1 .. n).
 

  • Then we can conclude that Norm(r[n], A x B) <= M[n]*gamma^n, where the constants M[n] depend on the geometry and gamma depends on separation constants (as before).
 

  • If gamma < 1 and the n-th boundary constant M[n] grows slowly enough as n -> infinity, then limit(Norm(f - s[n], A x B), n = infinity) = 0.
 

 

Linear Algebra Formulation of Tensor Product Series 

 

 

 

 

s[n](x, y) = Sum(c[i] * (Sum(k[i, j]*f(x, b[j]), j = 1 .. i)) * (Sum(l[i, j]*f(a[j], y), j = 1 .. i)), i = 1 .. n) 

where c[i], k[i, j], l[i, j] <> 0 are real-valued constants, and (a[i], b[i]) is the splitting point used in the i-th iteration. 

 

In matrix-vector form, 

s[n](x, y) = V(x)^T . U . D . L . W(y) 

where 

 

 

 

 

 

Example 4.1 

The tensor product series of rank 3 generated in Example 3.1 for the symmetric function exp(x^2*y^2)*cos(x+y) is represented in the following matrix-vector form. 

 

Digits := 6: 

f := (x, y) -> exp(x^2*y^2)*cos(x+y) 

(x, y) -> exp(x^2*y^2)*cos(x+y) (4.1.1)
 

a := [1, 0, .616336] 

[1, 0, .616336] (4.1.2)
 

Wy := Vector([f(a[1], y), f(a[2], y), f(a[3], y)]) 

Vector([exp(y^2)*cos(1+y), cos(y), exp(.379870*y^2)*cos(.616336+y)]) (4.1.3)
 

Vx := subs(y = x, Wy) 

Vector([exp(x^2)*cos(1+x), cos(x), exp(.379870*x^2)*cos(.616336+x)]) (4.1.4)
 

L := Matrix(3, 3, [[1, 0, 0], [.477636, 1, 0], [-.356576, -.623342, 1]]) 

Matrix([[1, 0, 0], [.477636, 1, 0], [-.356576, -.623342, 1]]) (4.1.5)
 

U := L^%T 

Matrix([[1, .477636, -.356576], [0, 1, -.623342], [0, 0, 1]]) (4.1.6)
 

Diag := Matrix(3, 3, [[-.884017, 0, 0], [0, .794868, 0], [0, 0, -9.83284]]) 

Matrix([[-.884017, 0, 0], [0, .794868, 0], [0, 0, -9.83284]]) (4.1.7)
 

 

Since f(x, y) is a symmetric function, we see that the vector V(x) is the same as W(y) with y replaced by x and the matrix U is simply the transpose of L . 

 

With the above definitions for the matrices and vectors, the tensor product series s[3](x, y) of Example 3.1 is as follows where, for illustration, we show s[1](x, y) and s[2](x, y) as well as s[3](x, y). 

 

Rank 1 approximation: 

s[1] := Vx[1]*U[1, 1]*Diag[1, 1]*L[1, 1]*Wy[1] 

-.884017*exp(x^2)*cos(1+x)*exp(y^2)*cos(1+y) (4.1.8)
 

 

Rank 2 approximation: 

s[2] := Vx[1 .. 2]^%T . U[1 .. 2, 1 .. 2] . Diag[1 .. 2, 1 .. 2] . L[1 .. 2, 1 .. 2] . Wy[1 .. 2] 

exp(y^2)*cos(1+y)*(-.702679*exp(x^2)*cos(1+x) + .379658*cos(x)) + cos(y)*(.379658*exp(x^2)*cos(1+x) + .794868*cos(x)) (4.1.9)
 

 

Rank 3 approximation: 

s[3] := Vx^%T . U . Diag . L . Wy 

exp(y^2)*cos(1+y)*(-1.95289*exp(x^2)*cos(1+x) - 1.80587*cos(x) + 3.50615*exp(.379870*x^2)*cos(.616336+x)) + cos(y)*(-1.80587*exp(x^2)*cos(1+x) - 3.02573*cos(x) + 6.12922*exp(.379870*x^2)*cos(.616336+x)) + exp(.379870*y^2)*cos(.616336+y)*(3.50615*exp(x^2)*cos(1+x) + 6.12922*cos(x) - 9.83284*exp(.379870*x^2)*cos(.616336+x)) (4.1.10)
 

 

Note: The size of the expressions representing tensor product approximations grows exponentially as the number of terms increases, if we form the explicit expressions. However, by maintaining the representation in matrix-vector form we avoid this expression growth. In our applications, the expressions in x and y (which appear only in the vectors V(x) and W(y)) will be evaluated to numerical values before performing the matrix-vector multiplications. 
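The matrix-vector evaluation scheme can be mirrored outside Maple. A Python sketch (names hypothetical) using the 6-digit data of Example 4.1, which evaluates s[3](x, y) = (L V(x))^T . D . (L W(y)) numerically and checks the interpolation property on the lines x = a[i]:

```python
import math

def f(x, y):
    return math.exp(x * x * y * y) * math.cos(x + y)

a = [1.0, 0.0, 0.616336]              # splitting points (a[i], a[i])
L = [[1.0, 0.0, 0.0],
     [0.477636, 1.0, 0.0],
     [-0.356576, -0.623342, 1.0]]
D = [-0.884017, 0.794868, -9.83284]   # diagonal of D

def s3(x, y):
    V = [f(ai, x) for ai in a]        # V(x): cross-sections of f
    W = [f(ai, y) for ai in a]        # W(y)
    # s3 = V^T . U . D . L . W with U = L^T, i.e. (L V)^T . D . (L W)
    v = [sum(L[i][j] * V[j] for j in range(3)) for i in range(3)]
    u = [D[i] * sum(L[i][j] * W[j] for j in range(3)) for i in range(3)]
    return sum(v[i] * u[i] for i in range(3))

# Interpolation: s3 agrees with f on the lines through the splitting points
err = max(abs(s3(ai, t) - f(ai, t)) for ai in a for t in (0.2, 0.8))
print(err)   # small; limited only by the 6-digit coefficients
```

Only V(x) and W(y) depend on the evaluation point; evaluating them first and then doing two tiny matrix-vector products avoids the exponential expression growth entirely.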

 

Special case: Symmetric functions 

As discussed in our ISSAC'05 paper, it is worth treating the case of symmetric functions specially. We note the following properties. 

 

 

 

 

s[n](x, y) = V(x)^T . L^T . P . D . L . V(y) 

where 

 

 

 

 

Benefits of the matrix-vector representation 

 

 

 

 

Application to Multidimensional Integration 

 

 

 

 


 

 

 

 

 

 

 

 

 

Transformation to a Canonical 2-D Integral 

  • An integral with non-constant limits can be converted to a new integral with constant limits via a change of variables.
 

  • Specifically,
 

 

using the change of variables: 

x = a*(1-s) + b*s ;    y = c(x(s))*(1-t) + d(x(s))*t 

 

  • Therefore limit attention to integrals on the unit square.
 

  • Making the integrand symmetric: For an integral on the unit square, we can always make the integrand symmetric, meaning
 

f(x, y) ≡ f(y, x). 

 

  • Namely,  f = f[S] + f[A]  (symmetric part plus anti-symmetric part) where
 

f[S](x, y) = (f(x, y) + f(y, x))/2 ;    f[A](x, y) = (f(x, y) - f(y, x))/2. 

 

  • The integral of f[A] over the unit square is 0, which gives us:
 

Int(Int(f(x, y), y = 0 .. 1), x = 0 .. 1) = Int(Int(f[S](x, y), y = 0 .. 1), x = 0 .. 1). 

 

  • Therefore we always have a symmetric integrand on the unit square.
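The symmetrization step is a one-liner, and any quadrature grid that is symmetric in (x, y) integrates f and f[S] identically. A Python sketch (names hypothetical) with a midpoint rule on the unit square:

```python
import math

def f(x, y):
    # an asymmetric integrand on the unit square
    return math.exp(x) * math.sin(x + 2.0 * y)

def fS(x, y):
    # symmetric part; the antisymmetric part (f(x,y) - f(y,x))/2 integrates to 0
    return 0.5 * (f(x, y) + f(y, x))

def integrate2d(g, n=200):
    # composite midpoint rule on [0, 1] x [0, 1]
    h = 1.0 / n
    return h * h * sum(g((i + 0.5) * h, (j + 0.5) * h)
                       for i in range(n) for j in range(n))

I_f = integrate2d(f)
I_fS = integrate2d(fS)
print(abs(I_f - I_fS))   # agrees to roundoff
```

Working with f[S] lets the symmetric-function machinery (splitting points on the diagonal, a single one-dimensional search) apply to arbitrary integrands.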
 

 

Example 5.1 

 

Consider the double integral 

Int(Int(exp(x^2*y^2)*cos(x+y), y = 0 .. 1-x), x = 0 .. 1) 

 

Applying the change of variables to transform to the unit square, and then applying symmetrization, yields the new problem: 

Int(Int(F(s, t), t = 0 .. 1), s = 0 .. 1) 

where 

F(s, t) = (1/2)*cos(s + t - s*t)*((1-s)*exp(s^2*(1-s)^2*t^2) + (1-t)*exp(s^2*(1-t)^2*t^2)) 

 

Result of computation: 

  • Geddes-Newton series for F(s, t) was computed to 3 terms.
 

  • The series was integrated (applying one-dimensional quadratures).
 

  • Computed result:  0.385433
 

  • Result is correct to 5 significant digits (with only 3 terms).
 

 

Computational Results in 2-D 

 

The results in the following table are for the 2-D integral: 

Int(Int(sin(8*Pi*x*(1-x)*y*(1-y)*(x-y)^2), y = 0 .. 1), x = 0 .. 1) 

 

Tol          Time   Result                              Pts   RelErr
5*10^(-10)    0.3   0.069551393139                      15    2.7*10^(-13)
5*10^(-15)    3.7   0.06955139313890799                 21    1.9*10^(-19)
5*10^(-20)    5.3   0.0695513931389079901727            26    9.9*10^(-24)
5*10^(-25)    7.2   0.069551393138907990172712980       33    4.5*10^(-31)
5*10^(-30)   14.2   0.06955139313890799017271297964487  38    2.1*10^(-35)

  • Tol = requested accuracy tolerance
 

  • Time = CPU time in seconds
 

  • Result = computed value of the integral (to the number of digits requested)
 

  • Pts = number of splitting points
 

  • RelErr = relative error, compared with an accurate reference value
 

Integration in High Dimensions: A Brief Summary 

 

  • As presented in our ISSAC'05 paper, we have extended our hybrid symbolic-numeric integration method to families of analytic integrands with special algebraic structure over hyperrectangular regions in higher dimensions.
 

  • The method is based on recursive dimension-halving via separation of variables.
 

  • Multidimensional integrals are reduced to one-dimensional integrals using Geddes-Newton series expansions to generate accurate tensor-product approximations of the integrand.
 

  • Test results show that our new adaptive integration algorithm is effective both with high accuracy and in high dimensions.
 

 

Bivariate Functions: The need for increased evaluation speed 

 

 

 

Example 6.1: Potential SpeedUp for a Bessel function 

 

Consider Y[nu](z), the Bessel function of the second kind. 

 

infolevel[`evalf/int`] := 1: 

Digits := 10: 

f := (nu, z) -> BesselY(nu, z): 

intBesselY := Int(Int(f(nu, z), z = 1 .. 10), nu = 0 .. 4) 

Int(Int(BesselY(nu, z), z = 1 .. 10), nu = 0 .. 4) (6.1.1)
 

st := time(): 

evalf(intBesselY) 

 

trying DCUHRE (TOMS algorithm 698)
 

cuhre: evalhf callback failed, trying evalf callback
 

cuhre: result=-8.64489683576170798
-8.644896836 (6.1.2)
 

intBesselTime := time() - st: 

'intBesselTime' = intBesselTime 

intBesselTime = 26.672 (6.1.3)
 

 

The user information printed above identifies the cause of the slow computation: 

"evalhf callback failed" 

meaning that the function BesselY is unknown to the compiled numerical library. Every function evaluation must therefore be performed by calling back to Maple in software floating point rather than proceeding entirely in hardware floating point. 

 

resultCompare := Int(Int(cos(x+y)*exp(-x^2*y)/GAMMA(x+y), y = 1 .. 10), x = 0 .. 4) 

Int(Int(cos(x+y)*exp(-x^2*y)/GAMMA(x+y), y = 1 .. 10), x = 0 .. 4) (6.1.4)
 

st := time(): 

evalf(resultCompare) 

 

trying DCUHRE (TOMS algorithm 698)
 

cuhre: result=-.570570460032231774
-.5705704600 (6.1.5)
 

intCompareTime := time() - st: 

'intCompareTime' = intCompareTime 

intCompareTime = .109 (6.1.6)
 

SpeedUp = intBesselTime/intCompareTime 

SpeedUp = 244.6972477 (6.1.7)
 

infolevel[`evalf/int`] := 0: 

 

By developing appropriate approximations for Y[nu](z) = BesselY(nu, z) as a function of its two arguments, the numerical integration with integrand BesselY(nu, z) should become nearly as fast as the latter integral computation. 

 

 

 

 

Example 6.2: A function defined by a nonlinear BVP 

 

Consider the function H(alpha, x) defined by the following nonlinear boundary value problem on 0 <= x <= 1. 

 

diffeq := (D@@2)(y)(x) + alpha*abs(y(x)) = 0 

(D@@2)(y)(x) + alpha*abs(y(x)) = 0 (6.2.1)
 

initialconditions := (D(y))(0) = 1, y(1) = -1 

(D(y))(0) = 1, y(1) = -1 (6.2.2)
 

 

Define the function H(alpha, x) by a procedure which numerically solves the BVP to obtain the function value. 

 

H := proc (alpha, xval)
  local soln;
  soln := dsolve({(D@@2)(y)(x) + alpha*abs(y(x)) = 0, y(1) = -1, (D(y))(0) = 1}, numeric);
  return eval(y(x), soln(xval))
end proc 
 

st := time(): 

plot3d('H'(alpha, x), alpha = -2 .. 2, x = 0 .. 1, axes = boxed) 

[3-D surface plot of H(alpha, x)]
 

plotTime := time() - st: 

'plotTime' = plotTime 

plotTime = 36.672 (6.2.3)
 

 

The computation is slow because each new evaluation point requires the numerical solution of a boundary value problem. 
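A function of this kind can be sketched outside Maple with a shooting method: guess y(0), integrate the ODE, and adjust the guess until y(1) = -1. The Python sketch below (names hypothetical; the bisection assumes y(1) depends monotonically on y(0), which holds at least for alpha = 0) makes the cost per evaluation point visible:

```python
import math

def deriv(alpha, y, yp):
    # y'' = -alpha*|y|
    return yp, -alpha * abs(y)

def shoot(alpha, y0, n=1000):
    # RK4 on (y, y') from x = 0 with y(0) = y0, y'(0) = 1;
    # returns y at the grid points x = j/n
    h = 1.0 / n
    y, yp = y0, 1.0
    ys = [y]
    for _ in range(n):
        k1 = deriv(alpha, y, yp)
        k2 = deriv(alpha, y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
        k3 = deriv(alpha, y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
        k4 = deriv(alpha, y + h * k3[0], yp + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        yp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        ys.append(y)
    return ys

def H(alpha, xval, n=1000):
    # bisection on y(0) so that y(1) = -1
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shoot(alpha, mid, n)[-1] > -1.0:
            hi = mid
        else:
            lo = mid
    return shoot(alpha, 0.5 * (lo + hi), n)[round(xval * n)]

# sanity check: for alpha = 0 the BVP is y'' = 0, so y(x) = x - 2
print(H(0.0, 0.5))
```

Each call solves a fresh BVP, which is exactly why a precomputed tensor product approximation of H(alpha, x) is so attractive here.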

 

 

 

 

Example 6.3: BesselY near z=0 

 

Consider again Y[nu](z), the Bessel function of the second kind as in Example 6.1, and consider first the series expansion for the case where the order nu is a non-integer. 

 

f := (nu, z) -> BesselY(nu, z): 

nu := 1/4: 

series(f(nu, z), z = 0, 4) 

-2^(3/4)/(GAMMA(3/4)*z^(1/4)) + (2*2^(1/4)*GAMMA(3/4)/Pi)*z^(1/4) + (1/3)*... (6.3.1)
 

 

In general, for any order nu, the expansion about z = 0 starts with a term in z^(-nu). 

 

nu := 3: 

series(f(nu, z), z = 0, 5) 

-(16/Pi)/z^3 - (2/Pi)/z - (1/(4*Pi))*z + ... (6.3.2)
 

 

Fact: For any order nu >= 0, Y[nu](z) -> -infinity as z -> 0^+. 

 

In the overall approximation scheme for this function, evaluation at points (nu, z) where z is close to 0 will be handled by summing the series expansion. 

 

For our general bivariate approximation scheme based on tensor product series, we will restrict the region away from z = 0. In other words, the restriction is z >= zmin where, for example, we may choose zmin = 1/10. 

 

Still, polynomial approximation requires a high degree due to the nearby singularity. 

 

zmin := 1/10: 

 

hfDigits := trunc(evalhf(Digits)) 

14 (6.3.3)
 

hfeps := Float(5, -hfDigits) 

0.5e-13 (6.3.4)
 

 

Digits := hfDigits + 4 

18 (6.3.5)
 

nu := 1/4: 

with(numapprox): 

chebseries := chebyshev(f(nu, z), z = zmin .. 1, hfeps): 

deg := chebdeg(chebseries) 

44 (6.3.6)
 

 

The chebyshev command has computed the "infinite" Chebyshev series, truncated where the magnitude of the terms becomes smaller than the specified precision hfeps. Thus the degree obtained is a measure of the "approximation degree" required for the given function. 

 

Multiplication by z^nu yields a function which is finite at z = 0. 

 

fnew := z^nu * f(nu, z) 

z^(1/4)*BesselY(1/4, z) (6.3.7)
 

series(fnew, z = 0, 4) 

-2^(3/4)/GAMMA(3/4) + (2*2^(1/4)*GAMMA(3/4)/Pi)*z^(1/2) + (1/3)*2^(3/4)*... (6.3.8)
 

chebseries := chebyshev(fnew, z = zmin .. 1, hfeps): 

deg := chebdeg(chebseries) 

38 (6.3.9)
 

 

This achieves some reduction in the degree required for polynomial approximation. 

 

However, the series expansion reveals a square root singularity at z = 0 . For the case of integer order nu, the series expansion reveals a logarithmic singularity at z = 0 . In both cases, rational approximation will be advantageous. 

 

Trying various rational degrees (m, n) decreasing from degree (19, 19), we find that degree (15, 15) is sufficient to achieve the desired accuracy hfeps. 

 

m, n := 15, 15: 

chebrat := chebpade(fnew, z = zmin .. 1, [m, n]): 

 

The resulting approximation for the function fnew on the interval [1/10, 1] is of the form 

chebratForm := Sum(c[k]*T[k]((20/9)*z - 11/9), k = 0 .. m) / Sum(d[k]*T[k]((20/9)*z - 11/9), k = 0 .. n)

Sum(c[k]*T[k]((20/9)*z - 11/9), k = 0 .. 15) / Sum(d[k]*T[k]((20/9)*z - 11/9), k = 0 .. 15) (6.3.10)
 

for numerator coefficients c[k] and denominator coefficients d[k] . 

 

The approximation for the original function f(nu, z) = Y[nu](z) on the interval [1/10, 1] is therefore 

z^(-nu) * chebrat . 

 

It can be verified that the maximum error abs(f(nu, z) - z^(-nu)*chebrat) is less than hfeps = 5.0*10^(-14). 

 

Approximation of Bivariate Functions via Tensor Product Series 

Given a bivariate function f(x, y) and a region of interest R = [a, b] × [c, d], we wish to generate a numerical evaluation routine fapprox such that 

abs(f(x, y) - fapprox(x, y)) <= hftol for all (x, y) in R,

where hftol is the unit roundoff error for a specified hardware floating point precision. Moreover, the routine fapprox must be "purely numerical" in the sense that it can be translated into a language such as C and can be compiled for inclusion in a numerical library. 

 

The above specification implies the need to generate numerical approximations for f(x, y). By using tensor product series approximations, the bivariate approximation problem is reduced to a sequence of univariate approximation problems, to which some well-known methods for univariate approximation are then applied. 
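The splitting step behind this reduction can be sketched in a few lines. The following pure-Python illustration (our own naming and splitting points, not the algorithm's actual implementation) builds each tensor product term from cross-sections of the current remainder, so after n steps the remainder vanishes on the grid lines x = a_i and y = b_i:

```python
import math

def gn_series(f, points):
    # One term per splitting point (a, b): the term
    #   r(x, b) * r(a, y) / r(a, b)
    # is built from cross-sections of the current remainder r,
    # and the remainder is updated by subtracting the term.
    rem = f
    for a, b in points:
        def new_rem(x, y, r=rem, a=a, b=b, p=rem(a, b)):
            return r(x, y) - r(x, b) * r(a, y) / p
        rem = new_rem
    # approximation = f minus the final remainder
    return (lambda x, y: f(x, y) - rem(x, y)), rem

# demo: three terms for exp(x*y) on the unit square
f = lambda x, y: math.exp(x * y)
approx, rem = gn_series(f, [(0.5, 0.5), (0.0, 0.0), (1.0, 1.0)])
```

The remainder is exactly zero along each splitting line (the interpolation property), and on [0, 1]^2 three terms already bring it down to roughly 10^(-3) for this function, which is why the univariate cross-sections make such efficient basis functions.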

 

As will be seen, our algorithm automatically subdivides the region R into subregions based on specified limits for the number of terms allowed in a tensor product series and for the degrees of the univariate approximations. Let us first treat a case where no subdivisions of R are required. 

Example 7.1: Tensor Product Series Approximation in a Region 

Consider the function f(nu, z) = Y[nu](z), the Bessel function of the second kind, and suppose that we approximate this function in the region R = [1, 2] × [1/4, 1/2], in other words 1 <= nu <= 2, 1/4 <= z <= 1/2. This is a small region and no subdivisions of R are required. 

 

The algorithm computes a tensor product series approximation based on the following distribution of splitting points. 

 

[2-D plot: distribution of splitting points in the region R]
 

 

The matrix-vector representation of the tensor product series is as follows (carrying only 6 digits for this illustration). 

 

Digits := 6:

f := (nu, z) -> BesselY(nu, z):

W := Vector([seq(f(a[i], z), i = 1 .. nTerms)])

Vector[column](%id = 353381416) (7.1.1)
 

V := Vector([seq(f(nu, b[i]), i = 1 .. nTerms)])

Vector[column](%id = 352676112) (7.1.2)
 

L := Matrix(nTerms, nTerms, [[1., 0., 0., 0., 0., 0., 0., 0., 0.], [-.228416, 1., 0., 0., 0., 0., 0., 0., 0.], [-.448593, -.685157, 1., 0., 0., 0., 0., 0., 0.], [-.206213, -1.38167, .646485, 1....
 

Matrix(%id = 400178544)
(7.1.3)
 

U := Matrix(nTerms, nTerms, [[1., -.262852, -.420037, .112824, -.267711, 0.668780e-1, -.218056, 0.699279e-1, -0.280877e-1], [0., 1., -.707500, -.549318, -0.144255e-1, -.325081, -.134516, -0.860...
 

Matrix(%id = 313099568)
(7.1.4)
 

Diag := Matrix(nTerms, nTerms, Vector([-0.483062e-1, -1.23405, -22.0997, -1428.50, 13891.2, 4.16472*10^6, -3.93250*10^8, 2.87119*10^...
 

Matrix(%id = 342083952)
(7.1.5)
 

 

If one wanted the explicit expression for the tensor product series s[9](nu, z), it is given by 

s[9] := V^%T . U . Diag . L . W:

 

However, the expression is quite large and, more significantly, the explicit expression cannot be evaluated to full accuracy in hardware floating point precision. The problem can be seen from the large diagonal elements appearing above, which produce a sum of terms with large magnitudes adding up to a relatively small final answer (a classic case of unstable numerical computation!). 

 

Fortunately, by maintaining the matrix-vector formulation we obtain a stable evaluation algorithm as will be seen. 

 

It can be verified that (when the digits are carried to hardware floating point precision): 

abs(f(nu, z) - s[9](nu, z)) <= hftol for all (nu, z) in R.

 

Example 7.2: Approximating the Univariate Functions 

Continuing with the approximation problem of Example 7.1, we must now compute approximations for each of the univariate functions appearing in the vectors V and W. 

 

'V' = V 

V = Vector[column](%id = 374270808) (7.2.1)
 

'W' = W 

W = Vector[column](%id = 313043416) (7.2.2)
 

 

The approach taken for each univariate function is to compute a Chebyshev series on the specified interval, truncated to accuracy hftol. This determines the approximation degree required for the particular function; if this degree is too large (greater than 25) for any of the univariate functions, then the corresponding interval is subdivided and the process is repeated. 

 

Since the degrees are not too large in this case, no subdivisions are required. Noting that rational approximations are more efficient than polynomial approximations for some functions, the algorithm will compute Chebyshev-Pade approximations if they lead to approximations of smaller total degree. 

 

For example, consider the entry W[4] and compute a Chebyshev series to the specified accuracy. Recall from Example 6.3 that for this function it is helpful to factor out a power of z due to the singularity at z = 0 and our algorithm will do so. 

 

f := W[4]

BesselY(1.0, z) (7.2.3)
 

series(BesselY(1, z), z = 0, 4) 

-2/(Pi*z) + ((ln(z) - ln(2))/Pi - (1 - 2*gamma)/(2*Pi))*z + ... (7.2.4)
 

fnew := z * f

z*BesselY(1.0, z) (7.2.5)
 

Digits := trunc(evalhf(Digits))

14 (7.2.6)
 

hftol := Float(5, -Digits)

0.5e-13 (7.2.7)
 

cheb := numapprox:-chebyshev(fnew, z = 1/4 .. 1/2, hftol)

-.70606514449120*T(0, 8*z - 3) - 0.30030202212153e-1*T(1, 8*z - 3) + 0.18974789827450e-3*T(2, 8*z - 3) + ...
(7.2.8)
 

deg := numapprox:-chebdeg(cheb)

13 (7.2.9)
 

 

This polynomial approximation is of degree 13. By trying successive Chebyshev-Pade approximants of degrees [6, 6], [5, 5], ..., decreasing until the maximum error becomes too large, the algorithm finds that the [5, 5] approximant (of total degree 10) achieves the desired accuracy. 

 

chebpad := numapprox:-chebpade(fnew, z = 1/4 .. 1/2, [5, 5])

(-.71292234249071*T(0, 8*z - 3) - .35301097668164*T(1, 8*z - 3) - 0.27455984757726e-1*T(2, 8*z - 3) - ...
(7.2.10)
 

 

Rational approximation is advantageous over polynomial approximation for this function because, as discussed in Example 6.3, even though we have factored out a power of z to improve the leading term of the series expansion about z = 0, fnew still has a logarithmic singularity at z = 0. The left endpoint of the interval of approximation is z = 1/4, which means that the singularity is "nearby". 

 

series(z*BesselY(1, z), z = 0, 4) 

-2/Pi + ((ln(z) - ln(2))/Pi - (1 - 2*gamma)/(2*Pi))*z^2 + O(z^4) (7.2.11)
 

 

Noting that the approximation chebpad has been computed for the function fnew, the final approximation for the original function f is 

 

chebpad / z 

 

In general, Chebyshev-Pade approximants will be used in this manner for the functions of z appearing in the vector W. In contrast, for the functions of nu in the vector V the algorithm finds that rational approximation offers no advantage and the truncated Chebyshev series is used. 

 

For example, consider  

 

V[4] 

BesselY(nu, .411458) (7.2.12)
 

cheb := numapprox:-chebyshev(V[4], nu = 1 .. 2, hftol)

-4.0128221708880*T(0, 2*nu - 3) - 2.9149636786985*T(1, 2*nu - 3) - .76561216884141*T(2, 2*nu - 3) - ...
(7.2.13)
 

deg := numapprox:-chebdeg(cheb)

15 (7.2.14)
 

 

One finds that rational approximations offer no advantage for the univariate functions of nu, meaning that these functions are "polynomial-like". 

 

Remark on Data Structures 

For each univariate function appearing in V and in W, we must store information about the corresponding polynomial or rational approximation. The most general case is the approximation obtained for the function BesselY(1.0, z) considered above, which has the form: 

 

BesselY(1.0, z) ≈ z^tau * chebratForm 

 

where tau = -1.0 in that particular example and where 

 

chebratForm = Sum(c[k]*T[k](alpha*z + beta), k = 0 .. m) / Sum(d[k]*T[k](alpha*z + beta), k = 0 .. n) 

chebratForm = Sum(c[k]*T[k](alpha*z + beta), k = 0 .. m) / Sum(d[k]*T[k](alpha*z + beta), k = 0 .. n) (7.3.1)
 

 

Therefore for each entry in V and in W, we store a vector containing the following purely numerical information: 

 

[tau, alpha, beta, c[0], c[1], ..., c[m], d[0], d[1], ..., d[n]] 

 

Evaluation of the approximation at a point z = z[j] requires evaluation of the numerator and denominator polynomials in Chebyshev form. Since there are several such polynomials in Chebyshev form (the numerator and denominator in the approximations for each entry in W, for example), it can be advantageous to use a vectorized form of the evaluation algorithm. 
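Clenshaw's recurrence is the standard way to evaluate such Chebyshev-form numerators and denominators without converting to the monomial basis. A minimal sketch in Python, following the stored layout [tau, alpha, beta, c..., d...] above (the helper names are ours):

```python
def clenshaw(coeffs, t):
    # Evaluate sum(coeffs[k] * T_k(t)) by Clenshaw's recurrence.
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]

def eval_entry(data, m, z):
    # data = [tau, alpha, beta, c[0..m], d[0..n]]; the entry's value
    # is z^tau * num(t)/den(t) with t = alpha*z + beta in [-1, 1].
    tau, alpha, beta = data[0], data[1], data[2]
    num, den = data[3:4 + m], data[4 + m:]
    t = alpha * z + beta
    return z ** tau * clenshaw(num, t) / clenshaw(den, t)
```

For a polynomial (rather than rational) approximation the stored denominator is simply [1.0], and the same routine applies.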

 

Similarly, there would be approximations of the same form for the univariate functions of ν appearing in the vector V. As noted above, in some cases the approximation will be a polynomial rather than a rational function, in which case the denominator is simply 1. 

 

Example 7.4: Stable Evaluation of the Matrix-Vector Form 

Suppose that we have stored polynomial or rational approximations for each univariate function appearing in the vectors V and W of the preceding Example. Given our overall approximation of the function f(nu, z) = Y[nu](z) in the region R = [1, 2] × [1/4, 1/2], suppose that we wish to evaluate at the point (1.3, .4). 

 

Since we have stored accurate approximations for the univariate functions, we will evaluate the vectors V and W to the following values (for simplicity here, we just enter the correct numerical values). 

 

We will use Digits = 6 here for illustration, but we would normally work with hardware floating point precision. Also, for this precision the number of terms required would be nTerms = 5. 

 

Digits := 6:

UseHardwareFloats := false:

nTerms := 5:

V := V[1 .. nTerms]:  W := W[1 .. nTerms]:

L := L[1 .. nTerms, 1 .. nTerms]:  U := U[1 .. nTerms, 1 .. nTerms]:

Diag := Diag[1 .. nTerms, 1 .. nTerms]:

νval, zval := 1.3, .4:

V := evalf(eval(V, nu = νval))

Vector[column](%id = 357739360) (7.4.1)
 

W := evalf(eval(W, z = zval))

Vector[column](%id = 313043144) (7.4.2)
 

 

Then the tensor product series approximation in the region R has the following matrix-vector representation which can be evaluated to yield the desired numerical value. 

 

fapprox := V^%T . (U . (Diag . (L . W)))

-2.53927 (7.4.3)
 

correctValue := evalf(BesselY(νval, zval))

-2.53928 (7.4.4)
 

 

Note that Diag has entries which grow large, and this can cause unstable numerical evaluation unless we choose the correct order of evaluation. The bracketing above specifies the correct right-to-left order. 

 

Diag 

Matrix(%id = 313043008) (7.4.5)
 

 

The following is an example of unstable evaluation. 

Suppose that we first compute the product of the three matrices. 

 

UDiagL := U . Diag . L

Matrix(%id = 313042872) (7.4.6)
 

fapprox := (V^%T . UDiagL) . W

-.862880 (7.4.7)
 

correctValue 

-2.53928 (7.4.8)
 

This result has no correct digits! 

 

Fortunately, right-to-left evaluation of the matrix-vector form can be shown to be numerically stable. 
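The stable order can be expressed directly: evaluate the product as nested matrix-vector products, never forming U . Diag . L. A small pure-Python sketch (our own helper names; the diagonal matrix is passed as the list of its diagonal entries):

```python
def mat_vec(M, v):
    # dense matrix times vector
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def eval_tp(V, U, diag, L, W):
    # V^T . U . Diag . L . W evaluated strictly right-to-left, so
    # each intermediate result stays a vector and the large entries
    # of Diag multiply already-small components of L . W.
    t = mat_vec(L, W)
    t = [d * ti for d, ti in zip(diag, t)]
    t = mat_vec(U, t)
    return sum(vi * ti for vi, ti in zip(V, t))
```

Evaluating left-to-right (forming U . Diag . L first) computes the same quantity in exact arithmetic, but as Example 7.4 shows, in floating point it can lose every digit.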

 

Example 7.5: Approximation in a Large Region 

Consider once again the Bessel function of the second kind, f(nu, z) = Y[nu](z), but this time in a larger region R = [0, 5] × [1/4, 5]. Our algorithm was run for this example and the region was subdivided based on the requirements: 

 

nTerms <= 25 and degree <= 25 

 

where nTerms is the number of terms in a tensor product series expansion and degree refers to the degree of the polynomials appearing in the numerator or denominator of the univariate approximations. 

 

The algorithm produces a hash function such that, given an evaluation point (nu[i], z[j]), the result of hash(nu[i], z[j]) is an integer index k into the array of numerical values defining the approximation on the k-th subregion. Specifically, the hash function for this example is the following Maple piecewise expression, indicating that the region R was subdivided into 12 subregions. 

 

hash := (nu, z) -> piecewise(nu < 0. or z < .25, 0, 0. <= nu and nu <= 1.25 and .25 <= z and z <= .546875, 1, ...
 

hash(nu, z) 

piecewise(nu < 0. or z < .25, 0, 0. <= nu and nu <= 1.25 and .25 <= z and z <= .546875, 1, 1.25 <= nu and nu <= 2.5 and .25 <= z and z <= ... (7.5.1)
 

 

For example, for evaluation at the point (1.3, .4) we get 

 

hash(1.3, .4) 

2 (7.5.2)
 

 

indicating that we must evaluate the approximation stored for the subregion indexed by 2. 

 

The 12 subregions for the above example are pictured in the plot below. 

 

[2-D plot: the 12 subregions of R with their index numbers]
 

 

As can be seen, the individual subregions have index numbers ranging from 1 to 12. 
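The hash function itself is just a point-in-box lookup. Since the full piecewise expression is truncated above, here is a hypothetical pure-Python equivalent over an explicit table of subregion boxes; the first box is taken from the visible part of the piecewise expression, while the bounds of the second box are our assumption, included only to mirror hash(1.3, .4) = 2:

```python
def make_hash(boxes):
    # boxes: list of (nu_lo, nu_hi, z_lo, z_hi); return a lookup
    # giving the 1-based index of the box containing (nu, z),
    # or 0 when the point lies outside every box.
    def hash_fn(nu, z):
        for k, (n0, n1, z0, z1) in enumerate(boxes, start=1):
            if n0 <= nu <= n1 and z0 <= z <= z1:
                return k
        return 0
    return hash_fn

# first box from the piecewise expression above; second box assumed
hash_ = make_hash([(0.0, 1.25, 0.25, 0.546875),
                   (1.25, 2.5, 0.25, 0.546875)])
```

In the generated routine this lookup selects which stored coefficient vector [tau, alpha, beta, c..., d...] to evaluate for the given point.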

 

Timing Results for Various Bivariate Functions 

 

The following files contain tables of timing information for a selection of bivariate functions, as presented in the Masters thesis by Xiang Wang [Wang08]. 

 

Timings/BesselJ.pdf 

 

Timings/BesselY.pdf 

 

Timings/Beta.pdf 

 

Timings/JacobiSN.pdf 

 

Timings/LegendreP.pdf 

 

Timings/BVP_H.pdf  (See below for the definition of the function H(alpha, x)) 

Definition of the function BVP_H (Solution of a nonlinear BVP) 

 

The function H(alpha, x) was defined in a preceding section by the following nonlinear boundary value problem on 0 <= x <= 1. 

 

diffeq := (D@@2)(y)(x) + alpha*abs(y(x)) = 0

(D@@2)(y)(x) + alpha*abs(y(x)) = 0 (8.1.1)
 

initialconditions := D(y)(0) = 1, y(1) = -1

(D(y))(0) = 1, y(1) = -1 (8.1.2)
 

 

Define the function H(alpha, x) by a procedure which numerically solves the BVP to obtain the function value. 

 

H := proc(alpha, xval) local soln; soln := dsolve({(D@@2)(y)(x) + alpha*abs(y(x)) = 0, y(1) = -1, D(y)(0) = 1}, numeric); return eval(y(x), soln(xval)) end proc:
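For readers outside Maple, the same H can be sketched as a shooting method: integrate the ODE with RK4 from x = 0 using a trial value of y(0), and bisect on y(0) until y(1) = -1. Everything below (names, bracket, step counts) is our own choice, not the worksheet's dsolve settings:

```python
def H(alpha, xval, n=400):
    # Solve y'' + alpha*|y| = 0, y'(0) = 1, y(1) = -1 by shooting
    # on the unknown initial value s = y(0); return y(xval).
    def y_at(s, x_stop):
        # classical RK4 on the system (y, y') from 0 to x_stop
        h = x_stop / n
        y, yp = s, 1.0
        for _ in range(n):
            def rhs(y, yp):
                return yp, -alpha * abs(y)
            k1 = rhs(y, yp)
            k2 = rhs(y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
            k3 = rhs(y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
            k4 = rhs(y + h * k3[0], yp + h * k3[1])
            y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
            yp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        return y
    lo, hi = -10.0, 10.0      # assumed bracket for y(0)
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if y_at(mid, 1.0) < -1.0:
            lo = mid          # y(1) increases with y(0) here
        else:
            hi = mid
    return y_at(0.5 * (lo + hi), xval)
```

For alpha = 1 the solution stays negative, where the ODE reduces to the linear problem y'' = y, so y(0) = -(1 + sinh 1)/cosh 1 ≈ -1.4096 can be checked by hand.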
 

Convergence Results for the Symmetric Positive Definite Case 

 

Theoretical convergence results for the general case have yet to be developed. For a class of symmetric bivariate functions, some recently developed convergence results are summarized in the document Convergence.pdf . 

 

References 

[Carvajal04] O.A. Carvajal, A New Hybrid Symbolic-Numeric Method for Multiple Integration Based on Tensor-Product Series. Master's thesis, School of Computer Science, University of Waterloo, 2004. 

 

[CarvajalChapmanGeddes05] O.A. Carvajal, F.W. Chapman and K.O. Geddes, Hybrid symbolic-numeric integration in multiple dimensions via tensor-product series. Proc. ISSAC'05, Manuel Kauers (ed.), ACM Press, New York, 2005, pp. 84-91. 

 

[Chapman03] F.W. Chapman, Generalized Orthogonal Series for Natural Tensor Product Interpolation. Ph.D. thesis, Department of Applied Mathematics, University of Waterloo, 2003. 

 

[Davis63] P.J. Davis, Interpolation and Approximation. Blaisdell, 1963. 

 

[Thomas76] D.H. Thomas, A natural tensor product interpolation formula and the pseudoinverse of a matrix. Linear Algebra and Its Applications, 1976. 

 

[Wang08] Xiang Wang, Automated Generation of Numerical Evaluation Routines for Bivariate Functions via Tensor Product Series. Master's thesis, School of Computer Science, University of Waterloo, 2008.