
Commit

Remove more Java crap ...
SergeStinckwich committed Jan 17, 2016
1 parent 87d53f4 commit 4e22a77
Showing 6 changed files with 19 additions and 70 deletions.
5 changes: 1 addition & 4 deletions DataMining.tex
@@ -906,17 +906,14 @@ \section{Covariance clusters}
adapts itself to the shape of each cluster. As the algorithm
progresses the metric changes dynamically.

\subsection{Covariance clusters --- General implementation}
\marginpar{Figure \ref{fig:dataminingclasses} with the boxes {\bf
CovarianceCluster} grayed.} Covariance clusters need little
implementation. All tasks are delegated to a Mahalanobis center
described in section \ref{sec:mahalanobis}. Listing
\ref{ls:mahacluster} shows the Smalltalk implementation and the
Java implementation is shown in listing \ref{lj:mahacluster}.
\ref{ls:mahacluster} shows the Smalltalk implementation.

\begin{listing} Smalltalk covariance cluster \label{ls:mahacluster}
\input{Smalltalk/DataMining/DhbCovarianceCluster}
\end{listing}
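
By way of illustration, the delegation could be sketched as follows; the
selectors and the instance variable {\tt accumulator} are assumptions made
for this sketch, not the content of listing \ref{ls:mahacluster}.

    distanceTo: aVector
        "Sketch: forward the distance computation to the Mahalanobis center."
        ^ accumulator distanceTo: aVector

    accumulate: aVector
        "Sketch: forward the accumulation of a data point to the center."
        accumulator accumulate: aVector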


\ifx\wholebook\relax\else\end{document}\fi
10 changes: 1 addition & 9 deletions Estimation.tex
@@ -594,15 +594,7 @@ \subsection{Weighted point implementation}
the end of the name for Smalltalk --- implements the computation
of one term of the sum in equation \ref{eq:defchitest}. The
argument of the method is any object implementing the behavior of
a one-variable function defined in section \ref{sec:function}. In
Java one can use the same method name to define a similar method
to compute the terms of the sum of equation
\ref{eq:defchitestcmp}: in this case, the argument of the method
is another weighted point. This is not possible in Smalltalk,
which cannot distinguish the types of the arguments. Thus, for
Smalltalk the second method must have a different name: {\tt
chi2ComparisonContribution:}. Here Java marks a point over
Smalltalk.
a one-variable function defined in section \ref{sec:function}.
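
By way of illustration, each such term has the form
$w_i\left[y_i - f(x_i)\right]^2$; a minimal Smalltalk sketch could read as
follows, where the instance variable names {\tt xValue}, {\tt yValue} and
{\tt weight} are assumptions made for the sketch rather than the book's
listing.

    chi2Contribution: aFunction
        "Sketch: answer one chi-square term, weight * (y - f(x)) squared."
        | delta |
        delta := yValue - (aFunction value: xValue).
        ^ delta * delta * weight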

Creating instances of the classes can be done in many ways. The
fundamental method takes as arguments $x_i$, $y_i$ and the weight
7 changes: 0 additions & 7 deletions FloatingPointSimulation.tex
@@ -6,8 +6,6 @@
\begin{document}
\fi



\chapter{Decimal floating-point simulation}
\label{ch-fpSimul} The class {\tt DhbDecimalFloatingNumber} is
intended to demonstrate rounding problems with floating-point
@@ -21,11 +19,6 @@ \chapter{Decimal floating-point simulation}
it is this model can be used to illustrate rounding problems to
beginners. This class is only intended for didactical purposes.

Only the Smalltalk implementation is given here, as Java does not
lend itself to operator overloading. Moreover, fraction arithmetic
is not available in Java. Thus, making an equivalent class would
require much more code.

Instances of the class are created with the method {\tt new:}
supplying any number as argument. For example, $${\tt
DhbDecimalFloatingNumber new: 3.141592653589793238462643}$$
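
A short usage sketch of the rounding effect this class is meant to expose is
given below; the arithmetic selectors {\tt /} and {\tt *} are assumed to be
overloaded by the class as for ordinary numbers, and the exact result depends
on the number of simulated digits.

    | a b |
    a := DhbDecimalFloatingNumber new: 1.
    b := DhbDecimalFloatingNumber new: 3.
    a / b * b    "no longer exactly 1: each operation rounds its result"
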
12 changes: 1 addition & 11 deletions LinearAlgebra.tex
@@ -960,7 +960,7 @@ \subsection{LUP decomposition --- General implementation}
\end{description}

The instance variable {\tt permutation} is set to undefined ({\tt
nil} in Smalltlak, {\tt null} in Java) at initialization time by
nil} in Smalltalk) at initialization time by
default. It is used to check whether the decomposition has already
been made or not.
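
A minimal sketch of that check could read as follows; the selector names
{\tt decompose} and {\tt protectedDecompose} are assumptions made for the
sketch.

    decompose
        "Sketch: perform the decomposition only once, using the undefined
         permutation as the flag."
        permutation isNil
            ifTrue: [ self protectedDecompose ]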

@@ -1075,7 +1075,6 @@ \section{Computing the determinant of a matrix}
the determinant is needed. The initial parity is 1. Each
additional permutation of the rows multiplies the parity by -1.

\subsection{Computing the determinant of matrix --- General implementation}
Our implementation uses the fact that objects of the class {\tt
Matrix} have an instance variable in which the LUP decomposition
is kept. This variable is initialized using lazy initialization:
@@ -1090,7 +1089,6 @@ \subsection{Computing the determinant of matrix --- General implementation}
product by the parity of the permutation to obtain the final
result.
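
A sketch of the computation just described is given below; the instance
variable names {\tt rows} (holding the combined LU matrix) and {\tt parity}
are assumptions made for the sketch, not the code of listing
\ref{ls:determinant}.

    determinant
        "Sketch: the parity of the permutation times the product of the
         diagonal elements of the combined LU matrix."
        ^ (1 to: rows size)
            inject: parity
            into: [ :product :n | product * ((rows at: n) at: n) ]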

\subsection{Computing the determinant of matrix implementation}
Listing \ref{ls:determinant} shows the methods of classes {\tt
DhbMatrix} and {\tt DhbLUPDecomposition} needed to compute a
matrix determinant.
@@ -1100,14 +1098,6 @@ \subsection{Computing the determinant of matrix implementation}
\input{Smalltalk/LinearAlgebra/DhbLUPDecomposition(DhbMatrixDeterminant)}
\end{listing}


\subsection{Computing the determinant of matrix --- Java implementation}
The code computing the determinant of a matrix consists of the
method {\tt determinant} of the class {\tt Matrix} (\cf listing
\ref{ls:matrix}) and the method {\tt determinant} of the class
{\tt LUPDecomposition} (\cf listing \ref{lj:lup}).


\section{Matrix inversion}
\label{sec:matrixinversion} The inverse of a square matrix ${\bf
A}$ is denoted ${\bf A}^{-1}$. It is defined by the following
26 changes: 2 additions & 24 deletions Series.tex
@@ -47,42 +47,20 @@ \section{Introduction}
functions, which are very important to compute probabilities: the
incomplete gamma function and the incomplete beta function.

For illustrative purposes, the implementation in Smalltalk is
using a different architecture from the one used by the Java
implementation. It should be noted that each implementation could
have been implemented in the other language. Figure
\ref{fig:StSeriesClass} shows the class diagram of the Smalltalk
implementation. Figure \ref{fig:JvSeriesClass} shows the class
diagram of the Java implementation.
Figure \ref{fig:StSeriesClass} shows the class diagram of the Smalltalk
implementation.

\begin{figure}
\centering\includegraphics[width=11cm]{Figures/SeriesClassDiagram}
\caption{Smalltalk class diagram for infinite series and continued
fractions}\label{fig:StSeriesClass}
\end{figure}
\begin{figure}
\centering\includegraphics[width=11cm]{Figures/SeriesClassDiagramJ}
\caption{Java class diagram for infinite series and continued
fractions}\label{fig:JvSeriesClass}
\end{figure}

The Smalltalk implementation uses two general-purpose classes to
implement an infinite series and a continued fraction
respectively. Each class then uses a \patstyle{Strategy} pattern
class \cite{GoF} to compute each term of the expansion.
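
As a rough sketch of that delegation, an iteration of the general series
object might read as follows; the names {\tt termServer}, {\tt termAt:},
{\tt lastTerm}, {\tt sum} and {\tt iterations} are assumptions made for the
sketch.

    evaluateIteration
        "Sketch: ask the term server (the Strategy object) for the next
         term, add it to the running sum and answer its magnitude for the
         convergence test."
        lastTerm := termServer termAt: iterations.
        sum := sum + lastTerm.
        ^ lastTerm abs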

The Java implementation uses two abstract classes to implement an
infinite series and a continued fraction respectively. Each
concrete implementation necessitates the creation of a concrete
subclass.

In spite of the difference in architecture, the reader can verify
on each class diagram that the number of classes needed for a
concrete implementation is the same in each case.

An interesting exercise for the reader is to implement the
architecture presented in Java in Smalltalk and {\it vice versa}.

\section{Infinite series}
Many functions are defined with an infinite series, that is, a sum
of an infinite number of terms. The most well-known example is the
29 changes: 14 additions & 15 deletions Statistics.tex
@@ -1125,30 +1125,30 @@ \section{Probability distributions}
used in this book. Other important distributions are presented in
appendix \ref{ch:distributions}.

\subsection{Probability distributions --- General implementation}
\subsection{Probability distributions --- Smalltalk implementation}
\marginpar{Figure \ref{fig:statisticsclasses} with the boxes {\bf
ProbabilityDensity} and {\bf
ProbabilityDensityWithUnknownDistribution} grayed.} Table
\ref{tb:distrgenimpl} shows the description of the public methods
of the implementations of both languages.
of the implementation.
\begin{table}[h]
\centering
\caption{Public methods for probability density functions}
\label{tb:distrgenimpl}
\vspace{1 ex}
\begin{tabular}{|l | l | l|} \hline
Description & \hfil Smalltalk & \hfil Java \\ \hline
$P\left(x\right)$ & {\tt value:} & {\tt value(double)} \\
$F\left(x\right)$ & {\tt distributionValue:} & {\tt distributionValue(double)} \\
$F\left(x_1,x_2\right)$ & {\tt acceptanceBetween:and:} & {\tt distributionValue(double,double)} \\
$F^{-1}\left(x\right)$ & {\tt inverseDistributionValue:} & {\tt inverseDistributionValue(double)} \\
$x^{\dag}$ & {\tt random} & {\tt random()} \\
\begin{tabular}{|l | l |} \hline
Description & \hfil Smalltalk \\ \hline
$P\left(x\right)$ & {\tt value:} \\
$F\left(x\right)$ & {\tt distributionValue:} \\
$F\left(x_1,x_2\right)$ & {\tt acceptanceBetween:and:} \\
$F^{-1}\left(x\right)$ & {\tt inverseDistributionValue:} \\
$x^{\dag}$ & {\tt random} \\
\hline
$\bar{x}$ & {\tt average} & {\tt average()} \\
$\sigma^2$ & {\tt variance} & {\tt variance()} \\
$\sigma$ & {\tt standardDeviation} & {\tt standardDeviation()} \\
skewness & {\tt skewness} & {\tt skewness()} \\
kurtosis & {\tt kurtosis} & {\tt kurtosis()} \\
$\bar{x}$ & {\tt average} \\
$\sigma^2$ & {\tt variance} \\
$\sigma$ & {\tt standardDeviation} \\
skewness & {\tt skewness} \\
kurtosis & {\tt kurtosis} \\
\hline
\end{tabular}
$\dag$ $x$ represents the random variable itself. In other words,
@@ -1174,7 +1174,6 @@ \subsection{Probability distributions --- General implementation}
must be provided to create a function (as defined in section
\ref{sec:function}) for the distribution function.
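
A brief usage sketch of this public interface is given below, taking the
normal distribution as a plausible concrete subclass; the class name
{\tt DhbNormalDistribution} is an assumption of the sketch and the returned
values are not shown.

    | d |
    d := DhbNormalDistribution new.
    d value: 0.5.                      "P(x)"
    d distributionValue: 0.5.          "F(x)"
    d acceptanceBetween: -1 and: 1.    "F(x1,x2)"
    d inverseDistributionValue: 0.9.   "inverse of F"
    d average.
    d standardDeviation.
    d random                           "a random value drawn from the distribution"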

\subsection{Probability distributions --- Smalltalk implementation}
Listing \ref{ls:probdistr} shows the implementation of a general
probability density distribution in Smalltalk. The class {\tt
DhbProbabilityDensity} is an abstract implementation. Concrete
