Fix typos

Brian Wignall 2019-12-01 08:06:17 -05:00
parent a8660d3434
commit ccff3fd1b3
101 changed files with 133 additions and 133 deletions


@ -82,7 +82,7 @@ John F Hart, Computer Approximations, (1978) ISBN 0 088275 642-7.
William J Cody, Software Manual for the Elementary Functions, Prentice-Hall (1980) ISBN 0138220646.
Nico Temme, Special Functions, An Introduction to the Classical Functions of Mathematical Physics, Wiley, ISBN: 0471-11313-1 (1996) who also gave valueable advice.
Nico Temme, Special Functions, An Introduction to the Classical Functions of Mathematical Physics, Wiley, ISBN: 0471-11313-1 (1996) who also gave valuable advice.
[@http://www.cas.lancs.ac.uk/glossary_v1.1/prob.html#probdistn Statistics Glossary], Valerie Easton and John H. McColl.


@ -88,7 +88,7 @@ Which has a peak relative error of 1.2x10[super -3].
While this is a pretty good approximation already, judging by the
shape of the error function we can clearly do better. Before starting
on the Remez method propper, we have one more step to perform: locate
on the Remez method proper, we have one more step to perform: locate
all the extrema of the error function, and store
these locations as our initial ['Chebyshev control points].


@ -441,7 +441,7 @@ the particular tests plus the platform and compiler:
[h4 Testing Multiprecision Types]
Testing of multiprecision types is handled by the test drivers in libs/multiprecision/test/math,
please refer to these for examples. Note that these tests are run only occationally as they take
please refer to these for examples. Note that these tests are run only occasionally as they take
a lot of CPU cycles to build and run.
[h4 Improving Compile Times]


@ -733,7 +733,7 @@ and CALC100 100 decimal digit Complex Variable Calculator Program, a DOS utility
Not here in this Boost.Math collection, because physical constants:
* Are measurements, not truely constants.
* Are measurements, not truly constants.
* Are not truly constant and keeping changing as mensuration technology improves.
* Have a instrinsic uncertainty.
* Mathematical constants are stored and represented at varying precision, but should never be inaccurate.


@ -16,7 +16,7 @@ using quickbook ;
#path-constant images_location : html ;
# location of SVG images referenced by Quickbook.
# screenshots installed as recomended by Sourceforge.
# screenshots installed as recommended by Sourceforge.
xml distexplorer
:


@ -63,7 +63,7 @@ and are tab separated to assist input to other programs,
for example, spreadsheets or text editors.
Note: Excel (for example), only shows 10 decimal digits, by default:
to display the maximum possible precision (abotu 15 decimal digits),
to display the maximum possible precision (about 15 decimal digits),
it is necessary to format all cells to display this precision.
Although unusually accurate, not all values computed by Distexplorer will be as accurate as this.
Values shown as NaN cannot be calculated from the value(s) given,


@ -164,7 +164,7 @@
</p>
<p>
Note: Excel (for example), only shows 10 decimal digits, by default: to display
the maximum possible precision (abotu 15 decimal digits), it is necessary to
the maximum possible precision (about 15 decimal digits), it is necessary to
format all cells to display this precision. Although unusually accurate, not
all values computed by Distexplorer will be as accurate as this. Values shown
as NaN cannot be calculated from the value(s) given, most commonly because the


@ -26,7 +26,7 @@
}} // namespaces
The logistic distribution is a continous probability distribution.
The logistic distribution is a continuous probability distribution.
It has two parameters - location and scale. The cumulative distribution
function of the logistic distribution appears in logistic regression
and feedforward neural networks. Among other applications,


@ -109,7 +109,7 @@ Denise Benton, K. Krishnamoorthy,
Computational Statistics & Data Analysis 43 (2003) 249-267.
Accuracy checks use test data computed with this
implementation and arbitary precision interval arithmetic:
implementation and arbitrary precision interval arithmetic:
this test data is believed to be accurate to at least 50
decimal places.


@ -265,7 +265,7 @@ may ['sometimes] support denormals (as signalled by `std::numeric_limits<FPT>::h
currently enabled at runtime (for example on SSE hardware, the DAZ or FTZ flags will disable denormal support).
In this situation, the `ulp` function may return a value that is many orders of magnitude too large.
In light of the issues above, we recomend that:
In light of the issues above, we recommend that:
* To move between adjacent floating-point values always use __float_next, __float_prior or __nextafter (`std::nextafter`
is another candidate, but our experience is that this also often breaks depending which optimizations and


@ -401,7 +401,7 @@ previous versions of `lexical_cast` using stringstream were not portable
Although other examples imbue individual streams with the new locale,
for the streams constructed inside lexical_cast,
it was necesary to assign to a global locale.
it was necessary to assign to a global locale.
locale::global(new_locale);


@ -189,7 +189,7 @@
<span class="keyword">int</span> <span class="identifier">main</span><span class="special">()</span>
<span class="special">{</span>
<span class="comment">// The lithium potential is given in Kohn's paper, Table I.</span>
<span class="comment">// (We could equally easily use an unordered_map, a list of tuples or pairs, or a 2-dimentional array).</span>
<span class="comment">// (We could equally easily use an unordered_map, a list of tuples or pairs, or a 2-dimensional array).</span>
<span class="identifier">std</span><span class="special">::</span><span class="identifier">map</span><span class="special">&lt;</span><span class="keyword">double</span><span class="special">,</span> <span class="keyword">double</span><span class="special">&gt;</span> <span class="identifier">r</span><span class="special">;</span>
<span class="identifier">r</span><span class="special">[</span><span class="number">0.02</span><span class="special">]</span> <span class="special">=</span> <span class="number">5.727</span><span class="special">;</span>


@ -1356,7 +1356,7 @@
<p>
g<sub>k</sub> and h<sub>k</sub>
are also computed by recursions (involving gamma functions), but
the formulas are a little complicated, readers are refered to N.M. Temme,
the formulas are a little complicated, readers are referred to N.M. Temme,
<span class="emphasis"><em>On the numerical evaluation of the ordinary Bessel function of
the second kind</em></span>, Journal of Computational Physics, vol 21, 343
(1976). Note Temme's series converge only for |&#956;| &lt;= 1/2.


@ -448,7 +448,7 @@
</p>
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
<li class="listitem">
Are measurements, not truely constants.
Are measurements, not truly constants.
</li>
<li class="listitem">
Are not truly constant and keeping changing as mensuration technology improves.


@ -53,7 +53,7 @@
<span class="special">}}</span> <span class="comment">// namespaces</span>
</pre>
<p>
The logistic distribution is a continous probability distribution. It has
The logistic distribution is a continuous probability distribution. It has
two parameters - location and scale. The cumulative distribution function
of the logistic distribution appears in logistic regression and feedforward
neural networks. Among other applications, United State Chess Federation


@ -426,7 +426,7 @@
Computational Statistics &amp; Data Analysis 43 (2003) 249-267.
</p>
<p>
Accuracy checks use test data computed with this implementation and arbitary
Accuracy checks use test data computed with this implementation and arbitrary
precision interval arithmetic: this test data is believed to be accurate
to at least 50 decimal places.
</p>


@ -90,7 +90,7 @@
there for <code class="computeroutput"><span class="identifier">a</span> <span class="special">&lt;&lt;</span>
<span class="number">0</span></code>. On the other hand, the simple expedient
of breaking the integral into two domains: (a, 0) and (0, b) and integrating
each seperately using the tanh-sinh integrator, works just fine.
each separately using the tanh-sinh integrator, works just fine.
</p>
<p>
Finally, some endpoint singularities are too strong to be handled by <code class="computeroutput"><span class="identifier">tanh_sinh</span></code> or equivalent methods, for example


@ -100,7 +100,7 @@
For example, the <code class="computeroutput"><span class="identifier">sinh_sinh</span></code>
quadrature integrates over the entire real line, the <code class="computeroutput"><span class="identifier">tanh_sinh</span></code>
over (-1, 1), and the <code class="computeroutput"><span class="identifier">exp_sinh</span></code>
over (0, &#8734;). The latter integrators also have auxilliary ranges which are
over (0, &#8734;). The latter integrators also have auxiliary ranges which are
handled via a change of variables on the function being integrated, so that
the <code class="computeroutput"><span class="identifier">tanh_sinh</span></code> can handle
integration over <span class="emphasis"><em>(a, b)</em></span>, and <code class="computeroutput"><span class="identifier">exp_sinh</span></code>


@ -340,7 +340,7 @@
</td>
<td>
<p>
This is a truely horrible integral that oscillates wildly and unpredictably
This is a truly horrible integral that oscillates wildly and unpredictably
with some very sharp "spikes" in it's graph. The higher
number of levels used reflects the difficulty of sampling the more
extreme features.


@ -457,7 +457,7 @@
</li>
</ol></div>
<p>
The following references, while not directly relevent to our implementation,
The following references, while not directly relevant to our implementation,
may also be of interest:
</p>
<div class="orderedlist"><ol class="orderedlist" type="1">


@ -70,7 +70,7 @@
<code class="computeroutput"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">math</span><span class="special">::</span><span class="identifier">binomial_coefficient</span><span class="special">(</span><span class="number">10</span><span class="special">,</span> <span class="number">2</span><span class="special">);</span></code>
</p>
<p>
You will get a compiler error, ususally indicating that there is no such
You will get a compiler error, usually indicating that there is no such
function to be found. Instead you need to specifiy the return type explicity
and write:
</p>


@ -68,7 +68,7 @@
<code class="computeroutput"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">math</span><span class="special">::</span><span class="identifier">double_factorial</span><span class="special">(</span><span class="number">2</span><span class="special">);</span></code>
</p>
<p>
You will get a (possibly perplexing) compiler error, ususally indicating
You will get a (possibly perplexing) compiler error, usually indicating
that there is no such function to be found. Instead you need to specifiy
the return type explicity and write:
</p>


@ -67,7 +67,7 @@
<code class="computeroutput"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">math</span><span class="special">::</span><span class="identifier">factorial</span><span class="special">(</span><span class="number">2</span><span class="special">);</span></code>
</p>
<p>
You will get a (perhaps perplexing) compiler error, ususally indicating
You will get a (perhaps perplexing) compiler error, usually indicating
that there is no such function to be found. Instead you need to specify
the return type explicity and write:
</p>


@ -78,7 +78,7 @@
</pre>
<p>
Although other examples imbue individual streams with the new locale, for
the streams constructed inside lexical_cast, it was necesary to assign to
the streams constructed inside lexical_cast, it was necessary to assign to
a global locale.
</p>
<pre class="programlisting"><span class="identifier">locale</span><span class="special">::</span><span class="identifier">global</span><span class="special">(</span><span class="identifier">new_locale</span><span class="special">);</span>


@ -931,7 +931,7 @@ by switching to use the Students t distribution (or Normal distribution
</li>
<li class="listitem">
Refactored test data and some special function code to improve support
for arbitary precision and/or expression-template-enabled types.
for arbitrary precision and/or expression-template-enabled types.
</li>
<li class="listitem">
Added new faster zeta function evaluation method.


@ -931,7 +931,7 @@ by switching to use the Students t distribution (or Normal distribution
</li>
<li class="listitem">
Refactored test data and some special function code to improve support
for arbitary precision and/or expression-template-enabled types.
for arbitrary precision and/or expression-template-enabled types.
</li>
<li class="listitem">
Added new faster zeta function evaluation method.


@ -442,7 +442,7 @@
<span class="identifier">result_type</span> <span class="keyword">operator</span><span class="special">()(</span><span class="identifier">T</span> <span class="identifier">val</span><span class="special">)</span>
<span class="special">{</span>
<span class="keyword">using</span> <span class="keyword">namespace</span> <span class="identifier">boost</span><span class="special">::</span><span class="identifier">math</span><span class="special">::</span><span class="identifier">tools</span><span class="special">;</span>
<span class="comment">// estimate the true value, using arbitary precision</span>
<span class="comment">// estimate the true value, using arbitrary precision</span>
<span class="comment">// arithmetic and NTL::RR:</span>
<span class="identifier">NTL</span><span class="special">::</span><span class="identifier">RR</span> <span class="identifier">rval</span><span class="special">(</span><span class="identifier">val</span><span class="special">);</span>
<span class="identifier">upper_incomplete_gamma_fract</span><span class="special">&lt;</span><span class="identifier">NTL</span><span class="special">::</span><span class="identifier">RR</span><span class="special">&gt;</span> <span class="identifier">f1</span><span class="special">(</span><span class="identifier">rval</span><span class="special">,</span> <span class="identifier">rval</span><span class="special">);</span>


@ -28,7 +28,7 @@
Functions Overview</a>
</h3></div></div></div>
<p>
The exponential funtion is defined, for all objects for which this makes
The exponential function is defined, for all objects for which this makes
sense, as the power series
</p>
<div class="blockquote"><blockquote class="blockquote"><p>


@ -28,10 +28,10 @@
</h2></div></div></div>
<p>
Predominantly this is a TODO list, or a list of possible future enhancements.
Items labled "High Priority" effect the proper functioning of the
component, and should be fixed as soon as possible. Items labled "Medium
Items labeled "High Priority" effect the proper functioning of the
component, and should be fixed as soon as possible. Items labeled "Medium
Priority" are desirable enhancements, often pertaining to the performance
of the component, but do not effect it's accuracy or functionality. Items labled
of the component, but do not effect it's accuracy or functionality. Items labeled
"Low Priority" should probably be investigated at some point. Such
classifications are obviously highly subjective.
</p>


@ -1554,7 +1554,7 @@ if (f(w) / f'</span><span class="special">(</span><span class="identifier">w</sp
of built-in 64-bit double and float (and 80-bit <code class="computeroutput"><span class="keyword">long</span>
<span class="keyword">double</span></code>) types. Finally the functor is
called repeatedly to compute as many additional series terms as necessary to
achive the desired precision, set from <code class="computeroutput"><span class="identifier">get_epsilon</span></code>
achieve the desired precision, set from <code class="computeroutput"><span class="identifier">get_epsilon</span></code>
(or terminated by <code class="computeroutput"><span class="identifier">evaluation_error</span></code>
on reaching the set iteration limit <code class="computeroutput"><span class="identifier">max_series_iterations</span></code>).
</p>


@ -117,7 +117,7 @@
</li>
</ul></div>
<p>
In light of the issues above, we recomend that:
In light of the issues above, we recommend that:
</p>
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
<li class="listitem">


@ -33,7 +33,7 @@
<p>
If you define the symbol BOOST_OCTONION_TEST_VERBOSE, you will get additional
output (<a href="../../octonion/output_more.txt" target="_top">verbose output</a>); this
will only be helpfull if you enable message output at the same time, of course
will only be helpful if you enable message output at the same time, of course
(by uncommenting the relevant line in the test or by adding --log_level=messages
to your command line,...). In that case, and if you are running interactively,
you may in addition define the symbol BOOST_INTERACTIVE_TEST_INPUT_ITERATOR


@ -34,7 +34,7 @@
</p>
<p>
If you define the symbol TEST_VERBOSE, you will get additional output (<a href="../../quaternion/output_more.txt" target="_top">verbose output</a>); this will only
be helpfull if you enable message output at the same time, of course (by uncommenting
be helpful if you enable message output at the same time, of course (by uncommenting
the relevant line in the test or by adding <code class="literal">--log_level=messages</code>
to your command line,...). In that case, and if you are running interactively,
you may in addition define the symbol BOOST_INTERACTIVE_TEST_INPUT_ITERATOR


@ -164,7 +164,7 @@
</p>
<p>
Nico Temme, Special Functions, An Introduction to the Classical Functions of
Mathematical Physics, Wiley, ISBN: 0471-11313-1 (1996) who also gave valueable
Mathematical Physics, Wiley, ISBN: 0471-11313-1 (1996) who also gave valuable
advice.
</p>
<p>


@ -149,7 +149,7 @@
<p>
While this is a pretty good approximation already, judging by the shape of
the error function we can clearly do better. Before starting on the Remez method
propper, we have one more step to perform: locate all the extrema of the error
proper, we have one more step to perform: locate all the extrema of the error
function, and store these locations as our initial <span class="emphasis"><em>Chebyshev control
points</em></span>.
</p>


@ -39,7 +39,7 @@
</p>
<pre class="programlisting">4xE(sqrt(1 - 28<sup>2</sup> / x<sup>2</sup>)) - 300 = 0</pre>
<p>
In each case the target accuracy was set using our "recomended"
In each case the target accuracy was set using our "recommended"
accuracy limits (or at least limits that make a good starting point - which
is likely to give close to full accuracy without resorting to unnecessary
iterations).


@ -33,7 +33,7 @@
types, <code class="computeroutput"><span class="keyword">float</span></code>, <code class="computeroutput"><span class="keyword">double</span></code>, <code class="computeroutput"><span class="keyword">long</span>
<span class="keyword">double</span></code> and a <a href="../../../../../../libs/multiprecision/doc/html/index.html" target="_top">Boost.Multiprecision</a>
type <code class="computeroutput"><span class="identifier">cpp_bin_float_50</span></code>. In
each case the target accuracy was set using our "recomended" accuracy
each case the target accuracy was set using our "recommended" accuracy
limits (or at least limits that make a good starting point - which is likely
to give close to full accuracy without resorting to unnecessary iterations).
</p>


@ -67,7 +67,7 @@
<span class="emphasis"><em>guess</em></span>.
</li>
<li class="listitem">
The value of the inital guess must have the same sign as the root: the
The value of the initial guess must have the same sign as the root: the
function will <span class="emphasis"><em>never cross the origin</em></span> when searching
for the root.
</li>


@ -107,7 +107,7 @@
</p>
<p>
The Legendre-Stieltjes polynomials do not satisfy three-term recurrence relations
or have a particulary simple representation. Hence the constructor call determines
or have a particularly simple representation. Hence the constructor call determines
what, in fact, the polynomial is. Once the constructor comes back, the polynomial
can be evaluated via the Legendre series.
</p>


@ -425,7 +425,7 @@
</h5>
<p>
Testing of multiprecision types is handled by the test drivers in libs/multiprecision/test/math,
please refer to these for examples. Note that these tests are run only occationally
please refer to these for examples. Note that these tests are run only occasionally
as they take a lot of CPU cycles to build and run.
</p>
<h5>


@ -371,7 +371,7 @@
</pre>
<p>
In real life, there will usually be more than one event (fault or success),
when the negative binomial, which has the neccessary extra parameter, will
when the negative binomial, which has the necessary extra parameter, will
be needed.
</p>
<p>


@ -120,7 +120,7 @@
<p>
Selling five candy bars means getting five successes, so successes r
= 5. The total number of trials (n, in this case, houses visited) this
takes is therefore = sucesses + failures or k + r = k + 5.
takes is therefore = successes + failures or k + r = k + 5.
</p>
<pre class="programlisting"><span class="keyword">double</span> <span class="identifier">sales_quota</span> <span class="special">=</span> <span class="number">5</span><span class="special">;</span> <span class="comment">// Pat's sales quota - successes (r).</span>
</pre>


@ -40,7 +40,7 @@
<p>
We do, however provide several transcendentals, chief among which is the exponential.
This author claims the complete proof of the "closed formula" as
his own, as well as its independant invention (there are claims to prior invention
his own, as well as its independent invention (there are claims to prior invention
of the formula, such as one by Professor Shoemake, and it is possible that
the formula had been known a couple of centuries back, but in absence of bibliographical
reference, the matter is pending, awaiting further investigation; on the other


@ -348,7 +348,7 @@ for each element in the tuple (in addition to the input parameters):
result_type operator()(T val)
{
using namespace boost::math::tools;
// estimate the true value, using arbitary precision
// estimate the true value, using arbitrary precision
// arithmetic and NTL::RR:
NTL::RR rval(val);
upper_incomplete_gamma_fract<NTL::RR> f1(rval, rval);


@ -931,7 +931,7 @@ test program tests octonions specialisations for float, double and long double
If you define the symbol BOOST_OCTONION_TEST_VERBOSE, you will get additional
output ([@../octonion/output_more.txt verbose output]); this will
only be helpfull if you enable message output at the same time, of course
only be helpful if you enable message output at the same time, of course
(by uncommenting the relevant line in the test or by adding --log_level=messages
to your command line,...). In that case, and if you are running interactively,
you may in addition define the symbol BOOST_INTERACTIVE_TEST_INPUT_ITERATOR to


@ -1,12 +1,12 @@
[section:issues Known Issues, and TODO List]
Predominantly this is a TODO list, or a list of possible
future enhancements. Items labled "High Priority" effect
future enhancements. Items labeled "High Priority" effect
the proper functioning of the component, and should be fixed
as soon as possible. Items labled "Medium Priority" are
as soon as possible. Items labeled "Medium Priority" are
desirable enhancements, often pertaining to the performance
of the component, but do not effect it's accuracy or functionality.
Items labled "Low Priority" should probably be investigated at
Items labeled "Low Priority" should probably be investigated at
some point. Such classifications are obviously highly subjective.
If you don't see a component listed here, then we don't have any known


@ -333,7 +333,7 @@ So for example 128-bit rational approximations will work with UDT's and do the r
* Deprecated wrongly named `twothirds` math constant in favour of `two_thirds` (with underscore separator).
(issue [@https://svn.boost.org/trac/boost/ticket/6199 #6199]).
* Refactored test data and some special function code to improve support for arbitary precision and/or expression-template-enabled types.
* Refactored test data and some special function code to improve support for arbitrary precision and/or expression-template-enabled types.
* Added new faster zeta function evaluation method.
Fixed issues:


@ -249,7 +249,7 @@ brackets.
Note that this routine can only be used when:
* ['f(x)] is monotonic in the half of the real axis containing ['guess].
* The value of the inital guess must have the same sign as the root: the function
* The value of the initial guess must have the same sign as the root: the function
will ['never cross the origin] when searching for the root.
* The location of the root should be known at least approximately,
if the location of the root differs by many orders of magnitude


@ -238,7 +238,7 @@ where
g[sub k] and h[sub k]
are also computed by recursions (involving gamma functions), but the
formulas are a little complicated, readers are refered to
formulas are a little complicated, readers are referred to
N.M. Temme, ['On the numerical evaluation of the ordinary Bessel function
of the second kind], Journal of Computational Physics, vol 21, 343 (1976).
Note Temme's series converge only for |[mu]| <= 1/2.


@ -236,7 +236,7 @@ Asymptotic Approximations for Symmetric Elliptic Integrals]],
SIAM Journal on Mathematical Analysis, Volume 25, Issue 2 (March 1994), 288-303.
The following references, while not directly relevent to our implementation,
The following references, while not directly relevant to our implementation,
may also be of interest:
# R. Burlisch, ['Numerical Compuation of Elliptic Integrals and Elliptic Functions.]


@ -32,7 +32,7 @@ arguments passed to the function. Therefore if you write something like:
`boost::math::factorial(2);`
You will get a (perhaps perplexing) compiler error, ususally indicating that there is no such function to be found.
You will get a (perhaps perplexing) compiler error, usually indicating that there is no such function to be found.
Instead you need to specify the return type explicity and write:
`boost::math::factorial<double>(2);`
@ -144,7 +144,7 @@ arguments passed to the function. Therefore if you write something like:
`boost::math::double_factorial(2);`
You will get a (possibly perplexing) compiler error, ususally indicating that there is no such function to be found. Instead you need to specifiy
You will get a (possibly perplexing) compiler error, usually indicating that there is no such function to be found. Instead you need to specifiy
the return type explicity and write:
`boost::math::double_factorial<double>(2);`
@ -324,7 +324,7 @@ arguments passed to the function. Therefore if you write something like:
`boost::math::binomial_coefficient(10, 2);`
You will get a compiler error, ususally indicating that there is no such function to be found. Instead you need to specifiy
You will get a compiler error, usually indicating that there is no such function to be found. Instead you need to specifiy
the return type explicity and write:
`boost::math::binomial_coefficient<double>(10, 2);`


@ -23,7 +23,7 @@
[section:inv_hyper_over Inverse Hyperbolic Functions Overview]
The exponential funtion is defined, for all objects for which this makes sense,
The exponential function is defined, for all objects for which this makes sense,
as the power series
[equation special_functions_blurb1]
with ['[^n! = 1x2x3x4x5...xn]] (and ['[^0! = 1]] by definition) being the factorial of ['[^n]].


@ -599,7 +599,7 @@ For multiprecision types, first several terms of the series are tabulated and ev
Then our series functor is initialized "as if" it had already reached term 18,
enough evaluation of built-in 64-bit double and float (and 80-bit `long double`) types.
Finally the functor is called repeatedly to compute as many additional series terms
as necessary to achive the desired precision, set from `get_epsilon`
as necessary to achieve the desired precision, set from `get_epsilon`
(or terminated by `evaluation_error` on reaching the set iteration limit `max_series_iterations`).
A little more than one decimal digit of precision is gained by each additional series term.


@ -54,7 +54,7 @@ where ['P[sub i]] are the Legendre polynomials.
The scaling follows [@http://www.ams.org/journals/mcom/1968-22-104/S0025-5718-68-99866-9/S0025-5718-68-99866-9.pdf Patterson],
who expanded the Legendre-Stieltjes polynomials in a Legendre series and took the coefficient of the highest-order Legendre polynomial in the series to be unity.
The Legendre-Stieltjes polynomials do not satisfy three-term recurrence relations or have a particulary simple representation.
The Legendre-Stieltjes polynomials do not satisfy three-term recurrence relations or have a particularly simple representation.
Hence the constructor call determines what, in fact, the polynomial is.
Once the constructor comes back, the polynomial can be evaluated via the Legendre series.


@ -4,7 +4,7 @@
/* HSO4.hpp header file */
/* */
/* This file is not currently part of the Boost library. It is simply an example of the use */
/* quaternions can be put to. Hopefully it will be usefull too. */
/* quaternions can be put to. Hopefully it will be useful too. */
/* */
/* This file provides tools to convert between quaternions and R^4 rotation matrices. */
/* */


@ -23,7 +23,7 @@ achieves a specific value.
int main()
{
// The lithium potential is given in Kohn's paper, Table I.
// (We could equally easily use an unordered_map, a list of tuples or pairs, or a 2-dimentional array).
// (We could equally easily use an unordered_map, a list of tuples or pairs, or a 2-dimensional array).
std::map<double, double> r;
r[0.02] = 5.727;


@ -290,7 +290,7 @@ And if we require a high confidence, they widen to 0.00005 to 0.05.
cout << "geometric::find_upper_bound_on_p(" << int(k) << ", " << alpha/2 << ") = "
<< t << endl; // 0.052
/*`In real life, there will usually be more than one event (fault or success),
when the negative binomial, which has the neccessary extra parameter, will be needed.
when the negative binomial, which has the necessary extra parameter, will be needed.
*/
/*`As noted above, using a catch block is always a good idea,


@ -94,7 +94,7 @@ helpful error message instead of an abrupt program abort.
/*`
Selling five candy bars means getting five successes, so successes r = 5.
The total number of trials (n, in this case, houses visited) this takes is therefore
= sucesses + failures or k + r = k + 5.
= successes + failures or k + r = k + 5.
*/
double sales_quota = 5; // Pat's sales quota - successes (r).
/*`


@ -17,7 +17,7 @@
// (Bernoulli, independent, yes or no, succeed or fail)
// with success_fraction probability p,
// negative_binomial is the probability that k or fewer failures
// preceed the r th trial's success.
// precede the r th trial's success.
#include <iostream>
using std::cout;


@ -12,14 +12,14 @@ namespace boost{ namespace math{ namespace lanczos{
//
// Lanczos Coefficients for N=13 G=13.144565
// Max experimental error (with arbitary precision arithmetic) 9.2213e-23
// Max experimental error (with arbitrary precision arithmetic) 9.2213e-23
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
typedef lanczos13 lanczos13UDT;
//
// Lanczos Coefficients for N=22 G=22.61891
// Max experimental error (with arbitary precision arithmetic) 2.9524e-38
// Max experimental error (with arbitrary precision arithmetic) 2.9524e-38
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos22UDT : public mpl::int_<120>
@ -213,7 +213,7 @@ struct lanczos22UDT : public mpl::int_<120>
};
//
// Lanczos Coefficients for N=31 G=32.08067
// Max experimental error (with arbitary precision arithmetic) 0.162e-52
// Max experimental error (with arbitrary precision arithmetic) 0.162e-52
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at May 9 2006
//
struct lanczos31UDT


@ -248,7 +248,7 @@ namespace boost{ namespace math
constant_initializer2<T, N, & BOOST_JOIN(constant_, name)<T>::template get_from_compute<N> >::force_instantiate();\
return get_from_compute<N>(); \
}\
/* This one is for true arbitary precision, which may well vary at runtime: */ \
/* This one is for true arbitrary precision, which may well vary at runtime: */ \
static inline T get(const mpl::int_<0>&)\
{\
BOOST_MATH_PRECOMPUTE_IF_NOT_LOCAL(constant_, name)\


@ -19,7 +19,7 @@
// (like others including the poisson, binomial & negative binomial)
// is strictly defined as a discrete function: only integral values of k are envisaged.
// However because of the method of calculation using a continuous gamma function,
// it is convenient to treat it as if a continous function,
// it is convenient to treat it as if a continuous function,
// and permit non-integral values of k.
// To enforce the strict mathematical model, users should use floor or ceil functions
// on k outside this function to ensure that k is integral.


@ -71,7 +71,7 @@
// (like others including the poisson, negative binomial & Bernoulli)
// is strictly defined as a discrete function: only integral values of k are envisaged.
// However because of the method of calculation using a continuous gamma function,
// it is convenient to treat it as if a continous function,
// it is convenient to treat it as if a continuous function,
// and permit non-integral values of k.
// To enforce the strict mathematical model, users should use floor or ceil functions
// on k outside this function to ensure that k is integral.


@ -50,7 +50,7 @@ RealType cdf_imp(const cauchy_distribution<RealType, Policy>& dist, const RealTy
//
// CDF = -atan(1/x) ; x < 0
//
// So the proceedure is to calculate the cdf for -fabs(x)
// So the procedure is to calculate the cdf for -fabs(x)
// using the above formula, and then subtract from 1 when required
// to get the result.
//


@ -24,7 +24,7 @@
// of the distribution header, AFTER the distribution and its core
// property accessors have been defined: this is so that compilers
// that implement 2-phase lookup and early-type-checking of templates
// can find the definitions refered to herein.
// can find the definitions referred to herein.
//
#include <boost/type_traits/is_same.hpp>


@ -150,7 +150,7 @@ unsigned hypergeometric_quantile_imp(T p, T q, unsigned r, unsigned n, unsigned
++x;
}
// By the time we get here, log_pdf may be fairly inaccurate due to
// roundoff errors, get a fresh PDF calculation before proceding:
// roundoff errors, get a fresh PDF calculation before proceeding:
diff = hypergeometric_pdf<T>(x, r, n, N, pol);
}
while(result < p)
@ -198,7 +198,7 @@ unsigned hypergeometric_quantile_imp(T p, T q, unsigned r, unsigned n, unsigned
--x;
}
// By the time we get here, log_pdf may be fairly inaccurate due to
// roundoff errors, get a fresh PDF calculation before proceding:
// roundoff errors, get a fresh PDF calculation before proceeding:
diff = hypergeometric_pdf<T>(x, r, n, N, pol);
}
while(result + diff / 2 < q)


@ -24,7 +24,7 @@
// is strictly defined as a discrete function:
// only integral values of k are envisaged.
// However because the method of calculation uses a continuous gamma function,
// it is convenient to treat it as if a continous function,
// it is convenient to treat it as if a continuous function,
// and permit non-integral values of k.
// To enforce the strict mathematical model, users should use floor or ceil functions
// on k outside this function to ensure that k is integral.


@ -278,7 +278,7 @@ class hyperexponential_distribution
PolicyT());
}
// Two arg constructor from 2 ranges, we SFINAE this out of existance if
// Two arg constructor from 2 ranges, we SFINAE this out of existence if
// either argument type is incrementable as in that case the type is
// probably an iterator:
public: template <typename ProbRangeT, typename RateRangeT>
@ -299,7 +299,7 @@ class hyperexponential_distribution
}
// Two arg constructor for a pair of iterators: we SFINAE this out of
// existance if neither argument types are incrementable.
// existence if neither argument types are incrementable.
// Note that we allow different argument types here to allow for
// construction from an array plus a pointer into that array.
public: template <typename RateIterT, typename RateIterT2>


@ -25,7 +25,7 @@
// is strictly defined as a discrete function:
// only integral values of k are envisaged.
// However because the method of calculation uses a continuous gamma function,
// it is convenient to treat it as if a continous function,
// it is convenient to treat it as if a continuous function,
// and permit non-integral values of k.
// To enforce the strict mathematical model, users should use floor or ceil functions
// on k outside this function to ensure that k is integral.
@ -288,7 +288,7 @@ namespace boost
// (like others including the binomial, negative binomial & Bernoulli)
// is strictly defined as a discrete function: only integral values of k are envisaged.
// However because of the method of calculation using a continuous gamma function,
// it is convenient to treat it as if it is a continous function
// it is convenient to treat it as if it is a continuous function
// and permit non-integral values of k.
// To enforce the strict mathematical model, users should use floor or ceil functions
// outside this function to ensure that k is integral.
@ -337,7 +337,7 @@ namespace boost
// (like others including the binomial, negative binomial & Bernoulli)
// is strictly defined as a discrete function: only integral values of k are envisaged.
// However because of the method of calculation using a continuous gamma function,
// it is convenient to treat it as is it is a continous function
// it is convenient to treat it as is it is a continuous function
// and permit non-integral values of k.
// To enforce the strict mathematical model, users should use floor or ceil functions
// outside this function to ensure that k is integral.


@ -1259,7 +1259,7 @@ namespace boost
// UNtemplated copy constructor
// (this is taken care of by the compiler itself)
// explicit copy constructors (precision-loosing converters)
// explicit copy constructors (precision-losing converters)
explicit octonion(octonion<double> const & a_recopier)
{
@ -1328,7 +1328,7 @@ namespace boost
*this = detail::octonion_type_converter<double, float>(a_recopier);
}
// explicit copy constructors (precision-loosing converters)
// explicit copy constructors (precision-losing converters)
explicit octonion(octonion<long double> const & a_recopier)
{


@ -651,7 +651,7 @@ inline BOOST_CXX14_CONSTEXPR quaternion<T> operator / (const quaternion<T>& a, c
template<typename T> inline BOOST_CONSTEXPR bool operator != (quaternion<T> const & lhs, quaternion<T> const & rhs) { return !(lhs == rhs); }
// Note: we allow the following formats, whith a, b, c, and d reals
// Note: we allow the following formats, with a, b, c, and d reals
// a
// (a), (a,b), (a,b,c), (a,b,c,d)
// (a,(c)), (a,(c,d)), ((a)), ((a),c), ((a),(c)), ((a),(c,d)), ((a,b)), ((a,b),c), ((a,b),(c)), ((a,b),(c,d))


@ -1377,7 +1377,7 @@ T ibeta_imp(T a, T b, T x, const Policy& pol, bool inv, bool normalised, T* p_de
{
if((tools::max_value<T>() * div < *p_derivative))
{
// overflow, return an arbitarily large value:
// overflow, return an arbitrarily large value:
*p_derivative = tools::max_value<T>() / 2;
}
else


@ -300,7 +300,7 @@
for (auto j = bessel_cache.begin(); j != bessel_cache.end(); ++j)
*j *= ratio;
//
// Very occationally our normalisation fails because the normalisztion value
// Very occasionally our normalisation fails because the normalisztion value
// is sitting right on top of a root (or very close to it). When that happens
// best to calculate a fresh Bessel evaluation and normalise again.
//


@ -148,7 +148,7 @@
{
//
// There's no easy relation between a, b and z that tells us whether we're in the region
// where forwards recursion is stable, so use a lookup table, note that the minumum
// where forwards recursion is stable, so use a lookup table, note that the minimum
// permissible z-value is decreasing with a, and increasing with |b|:
//
static const float data[][3] = {


@ -456,7 +456,7 @@
// but that's not clear...
// Also need to add on a fudge factor to the cost to account for the fact that we need
// to calculate the Bessel functions, this is not quite as high as the gamma function
// method above as this is generally more accurate and so prefered if the methods are close:
// method above as this is generally more accurate and so preferred if the methods are close:
//
cost = 50 + fabs(b - a);
if((b > 1) && (cost <= current_cost) && (z < tools::log_max_value<T>()) && (z < 11356) && (b - a != 0.5f))


@ -39,7 +39,7 @@
// The values obtained agree with those obtained by Didonato and Morris
// (at least to the first 30 digits that they provide).
// At double precision the degrees of polynomial required for full
// machine precision are close to those recomended to Didonato and Morris,
// machine precision are close to those recommended to Didonato and Morris,
// but of course many more terms are needed for larger types.
//
#ifndef BOOST_MATH_DETAIL_IGAMMA_LARGE
@ -475,7 +475,7 @@ T igamma_temme_large(T a, T x, const Policy& pol, mpl::int_<24> const *)
// And finally, a version for 113-bit mantissa's
// (128-bit long doubles, or 10^-34).
// Note this one has been optimised for a > 200
// It's use for a < 200 is not recomended, that would
// It's use for a < 200 is not recommended, that would
// require many more terms in the polynomials.
//
template <class T, class Policy>


@ -118,7 +118,7 @@ T lgamma_small_imp(T z, T zm1, T zm2, const mpl::int_<64>&, const Policy& /* l *
else
{
//
// If z is less than 1 use recurrance to shift to
// If z is less than 1 use recurrence to shift to
// z in the interval [1,2]:
//
if(z < 1)
@ -316,7 +316,7 @@ T lgamma_small_imp(T z, T zm1, T zm2, const mpl::int_<113>&, const Policy& /* l
else
{
//
// If z is less than 1 use recurrance to shift to
// If z is less than 1 use recurrence to shift to
// z in the interval [1,2]:
//
if(z < 1)


@ -284,7 +284,7 @@ namespace boost { namespace math { namespace detail{
// forms are related via the Chebeshev polynomials of the first kind and
// T_n(cos(x)) = cos(n x). The polynomial form has the great advantage that
// all the cosine terms are zero at half integer arguments - right where this
// function has it's minumum - thus avoiding cancellation error in this region.
// function has it's minimum - thus avoiding cancellation error in this region.
//
// And finally, since every other term in the polynomials is zero, we can save
// space by only storing the non-zero terms. This greatly complexifies


@ -112,7 +112,7 @@ T inverse_students_t_tail_series(T df, T v, const Policy& pol)
* ((((((((((((945 * df) + 31506) * df + 425858) * df + 2980236) * df + 11266745) * df + 20675018) * df + 7747124) * df - 22574632) * df - 8565600) * df + 18108416) * df - 7099392) * df + 884736)
/ (46080 * np2 * np4 * np6 * (df + 8) * (df + 10) * (df +12));
//
// Now bring everthing together to provide the result,
// Now bring everything together to provide the result,
// this is Eq 62 of Shaw:
//
T rn = sqrt(df);


@ -447,7 +447,7 @@ T digamma_imp(T x, const Tag* t, const Policy& pol)
result += 1/x;
}
//
// If x < 1 use recurrance to shift to > 1:
// If x < 1 use recurrence to shift to > 1:
//
while(x < 1)
{


@ -28,7 +28,7 @@ inline typename tools::promote_args<T1, T2, T3>::type
namespace detail{
// Implement Hermite polynomials via recurrance:
// Implement Hermite polynomials via recurrence:
template <class T>
T hermite_imp(unsigned n, T x)
{


@ -29,7 +29,7 @@ inline typename tools::promote_args<T1, T2, T3>::type
namespace detail{
// Implement Laguerre polynomials via recurrance:
// Implement Laguerre polynomials via recurrence:
template <class T>
T laguerre_imp(unsigned n, T x)
{


@ -74,7 +74,7 @@ template <class Lanczos, class T>
typename lanczos_initializer<Lanczos, T>::init const lanczos_initializer<Lanczos, T>::initializer;
//
// Lanczos Coefficients for N=6 G=5.581
// Max experimental error (with arbitary precision arithmetic) 9.516e-12
// Max experimental error (with arbitrary precision arithmetic) 9.516e-12
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos6 : public mpl::int_<35>
@ -174,7 +174,7 @@ struct lanczos6 : public mpl::int_<35>
//
// Lanczos Coefficients for N=11 G=10.900511
// Max experimental error (with arbitary precision arithmetic) 2.16676e-19
// Max experimental error (with arbitrary precision arithmetic) 2.16676e-19
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos11 : public mpl::int_<60>
@ -304,7 +304,7 @@ struct lanczos11 : public mpl::int_<60>
//
// Lanczos Coefficients for N=13 G=13.144565
// Max experimental error (with arbitary precision arithmetic) 9.2213e-23
// Max experimental error (with arbitrary precision arithmetic) 9.2213e-23
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos13 : public mpl::int_<72>
@ -446,7 +446,7 @@ struct lanczos13 : public mpl::int_<72>
//
// Lanczos Coefficients for N=22 G=22.61891
// Max experimental error (with arbitary precision arithmetic) 2.9524e-38
// Max experimental error (with arbitrary precision arithmetic) 2.9524e-38
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos22 : public mpl::int_<120>
@ -642,7 +642,7 @@ struct lanczos22 : public mpl::int_<120>
//
// Lanczos Coefficients for N=6 G=1.428456135094165802001953125
// Max experimental error (with arbitary precision arithmetic) 8.111667e-8
// Max experimental error (with arbitrary precision arithmetic) 8.111667e-8
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos6m24 : public mpl::int_<24>
@ -737,7 +737,7 @@ struct lanczos6m24 : public mpl::int_<24>
//
// Lanczos Coefficients for N=13 G=6.024680040776729583740234375
// Max experimental error (with arbitary precision arithmetic) 1.196214e-17
// Max experimental error (with arbitrary precision arithmetic) 1.196214e-17
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos13m53 : public mpl::int_<53>
@ -874,7 +874,7 @@ struct lanczos13m53 : public mpl::int_<53>
//
// Lanczos Coefficients for N=17 G=12.2252227365970611572265625
// Max experimental error (with arbitary precision arithmetic) 2.7699e-26
// Max experimental error (with arbitrary precision arithmetic) 2.7699e-26
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos17m64 : public mpl::int_<64>
@ -1039,7 +1039,7 @@ struct lanczos17m64 : public mpl::int_<64>
//
// Lanczos Coefficients for N=24 G=20.3209821879863739013671875
// Max experimental error (with arbitary precision arithmetic) 1.0541e-38
// Max experimental error (with arbitrary precision arithmetic) 1.0541e-38
// Generated with compiler: Microsoft Visual C++ version 8.0 on Win32 at Mar 23 2006
//
struct lanczos24m113 : public mpl::int_<113>


@ -32,7 +32,7 @@ inline typename tools::promote_args<T1, T2, T3>::type
namespace detail{
// Implement Legendre P and Q polynomials via recurrance:
// Implement Legendre P and Q polynomials via recurrence:
template <class T, class Policy>
T legendre_imp(unsigned l, T x, const Policy& pol, bool second = false)
{


@ -475,7 +475,7 @@ T float_distance_imp(const T& a, const T& b, const mpl::true_&, const Policy& po
+ fabs(float_distance(static_cast<T>((a < 0) ? T(-detail::get_smallest_value<T>()) : detail::get_smallest_value<T>()), a, pol));
//
// By the time we get here, both a and b must have the same sign, we want
// b > a and both postive for the following logic:
// b > a and both positive for the following logic:
//
if(a < 0)
return float_distance(static_cast<T>(-b), static_cast<T>(-a), pol);
@ -583,7 +583,7 @@ T float_distance_imp(const T& a, const T& b, const mpl::false_&, const Policy& p
+ fabs(float_distance(static_cast<T>((a < 0) ? T(-detail::get_smallest_value<T>()) : detail::get_smallest_value<T>()), a, pol));
//
// By the time we get here, both a and b must have the same sign, we want
// b > a and both postive for the following logic:
// b > a and both positive for the following logic:
//
if(a < 0)
return float_distance(static_cast<T>(-b), static_cast<T>(-a), pol);


@ -627,7 +627,7 @@ namespace boost
break;
}
abs_err += fabs(c * term);
if(sum < 0) // sum must always be positive, if it's negative something really bad has happend:
if(sum < 0) // sum must always be positive, if it's negative something really bad has happened:
policies::raise_evaluation_error(function, 0, T(0), pol);
return std::pair<T, T>((sum / d) / boost::math::constants::two_pi<T>(), abs_err / sum);
}


@ -71,7 +71,7 @@ inline typename tools::promote_args<T>::type round(const T& v)
}
//
// The following functions will not compile unless T has an
// implicit convertion to the integer types. For user-defined
// implicit conversion to the integer types. For user-defined
// number types this will likely not be the case. In that case
// these functions should either be specialized for the UDT in
// question, or else overloads should be placed in the same


@ -49,7 +49,7 @@ inline typename tools::promote_args<T>::type trunc(const T& v)
}
//
// The following functions will not compile unless T has an
// implicit convertion to the integer types. For user-defined
// implicit conversion to the integer types. For user-defined
// number types this will likely not be the case. In that case
// these functions should either be specialized for the UDT in
// question, or else overloads should be placed in the same


@ -26,7 +26,7 @@ namespace boost{ namespace math{ namespace tools{
Real convert_from_string(const char* p, const mpl::false_&)
{
#ifdef BOOST_MATH_NO_LEXICAL_CAST
// This function should not compile, we don't have the necesary functionality to support it:
// This function should not compile, we don't have the necessary functionality to support it:
BOOST_STATIC_ASSERT(sizeof(Real) == 0);
#else
return boost::lexical_cast<Real>(p);


@ -65,7 +65,7 @@ std::pair<T, T> brent_find_minima(F f, T min, T max, int bits, boost::uintmax_t&
q = fabs(q);
T td = delta2;
delta2 = delta;
// determine whether a parabolic step is acceptible or not:
// determine whether a parabolic step is acceptable or not:
if((fabs(p) >= fabs(q * td / 2)) || (p <= q * (min - x)) || (p >= q * (max - x)))
{
// nope, try golden section instead


@ -167,7 +167,7 @@ namespace boost {
first *= scale;
*log_scaling += log_scale;
}
// scale each part seperately to avoid spurious overflow:
// scale each part separately to avoid spurious overflow:
third = (a / -c) * first + (b / -c) * second;
BOOST_ASSERT((boost::math::isfinite)(third));
@ -221,7 +221,7 @@ namespace boost {
first *= scale;
*log_scaling += log_scale;
}
// scale each part seperately to avoid spurious overflow:
// scale each part separately to avoid spurious overflow:
next = (b / -a) * second + (c / -a) * first;
BOOST_ASSERT((boost::math::isfinite)(next));


@ -527,7 +527,7 @@ namespace detail {
T result = guess;
T factor = ldexp(static_cast<T>(1.0), 1 - digits);
T delta = (std::max)(T(10000000 * guess), T(10000000)); // arbitarily large delta
T delta = (std::max)(T(10000000 * guess), T(10000000)); // arbitrarily large delta
T last_f0 = 0;
T delta1 = delta;
T delta2 = delta;


@ -503,7 +503,7 @@ std::pair<T, T> bracket_and_solve_root(F f, const T& guess, T factor, bool risin
BOOST_MATH_STD_USING
static const char* function = "boost::math::tools::bracket_and_solve_root<%1%>";
//
// Set up inital brackets:
// Set up initial brackets:
//
T a = guess;
T b = a;


@ -169,19 +169,19 @@ void test_bump()
Real expected = bump(t);
Real computed = ct(t);
if(!CHECK_MOLLIFIED_CLOSE(expected, computed, 2*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << t << "\n";
std::cerr << " Problem occurred at abscissa " << t << "\n";
}
expected = bump_prime(t);
computed = ct.prime(t);
if(!CHECK_MOLLIFIED_CLOSE(expected, computed, 4000*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << t << "\n";
std::cerr << " Problem occurred at abscissa " << t << "\n";
}
expected = bump_double_prime(t);
computed = ct.double_prime(t);
if(!CHECK_MOLLIFIED_CLOSE(expected, computed, 4000*4000*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << t << "\n";
std::cerr << " Problem occurred at abscissa " << t << "\n";
}


@ -578,7 +578,7 @@ BOOST_AUTO_TEST_CASE(exp_sinh_quadrature_test)
test_nr_examples<boost::multiprecision::cpp_dec_float_50>();
//
// This one causes stack overflows on the CI machine, but not locally,
// assume it's due to resticted resources on the server, and <shrug> for now...
// assume it's due to restricted resources on the server, and <shrug> for now...
//
#if ! BOOST_WORKAROUND(BOOST_MSVC, == 1900)
test_crc<boost::multiprecision::cpp_dec_float_50>();


@ -29,7 +29,7 @@ using namespace boost::multiprecision;
typedef number<cpp_dec_float<50>, et_on> test_type;
// We get sporadic internal compiler errors from gcc-7.x when CI testing
// that don't appear to be reproducable locally. gcc-6.x and gcc-8.x are fine
// that don't appear to be reproducible locally. gcc-6.x and gcc-8.x are fine
// so for now it's a <shrug> and move on...
#if ! (defined(BOOST_GCC) && (__GNUC__ == 7))


@ -50,7 +50,7 @@ void expected_results()
largest_type = "(long\\s+)?double";
#endif
//
// Linux special cases, error rates seem to be much higer here
// Linux special cases, error rates seem to be much higher here
// even though the implementation contains nothing but basic
// arithmetic?
//


@ -19,7 +19,7 @@
This module tests the Laplace distribution.
Test 1: test_pdf_cdf_ocatave()
Compare pdf, cdf agains results obtained from GNU Octave.
Compare pdf, cdf against results obtained from GNU Octave.
Test 2: test_cdf_quantile_symmetry()
Checks if quantile is the inverse of cdf by testing


@ -36,7 +36,7 @@ using (boost::math::isnan)(;
// Test nonfinite_num_put and nonfinite_num_get facets by checking
// loopback (output and re-input) of a few values,
// but using all the built-in char and floating-point types.
// Only the default output is used but various ostream options are tested seperately below.
// Only the default output is used but various ostream options are tested separately below.
// Finite, infinite and NaN values (positive and negative) are used for the test.
void trap_test_finite();


@ -30,12 +30,12 @@ void test_trivial()
Real expected = 0;
if(!CHECK_MOLLIFIED_CLOSE(expected, ws.prime(0), 10*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << 0 << "\n";
std::cerr << " Problem occurred at abscissa " << 0 << "\n";
}
expected = -v_copy[0]/h;
if(!CHECK_MOLLIFIED_CLOSE(expected, ws.prime(h), 10*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << 0 << "\n";
std::cerr << " Problem occurred at abscissa " << 0 << "\n";
}
}
@ -94,13 +94,13 @@ void test_bump()
Real expected = v_copy[i];
Real computed = ws(t);
if(!CHECK_MOLLIFIED_CLOSE(expected, computed, 10*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << t << "\n";
std::cerr << " Problem occurred at abscissa " << t << "\n";
}
Real expected_prime = bump_prime(t);
Real computed_prime = ws.prime(t);
if(!CHECK_MOLLIFIED_CLOSE(expected_prime, computed_prime, 1000*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << t << "\n";
std::cerr << " Problem occurred at abscissa " << t << "\n";
}
}
@ -115,13 +115,13 @@ void test_bump()
Real expected = bump(t);
Real computed = ws(t);
if(!CHECK_MOLLIFIED_CLOSE(expected, computed, 10*std::numeric_limits<Real>::epsilon())) {
std::cerr << " Problem occured at abscissa " << t << "\n";
std::cerr << " Problem occurred at abscissa " << t << "\n";
}
Real expected_prime = bump_prime(t);
Real computed_prime = ws.prime(t);
if(!CHECK_MOLLIFIED_CLOSE(expected_prime, computed_prime, sqrt(std::numeric_limits<Real>::epsilon()))) {
std::cerr << " Problem occured at abscissa " << t << "\n";
std::cerr << " Problem occurred at abscissa " << t << "\n";
}
}
}

Some files were not shown because too many files have changed in this diff.