If you are really impatient, I’d suggest you read the Classes Overview below and otherwise stick to the API Documentation for classes like DoubleMatrix.
The main goals of jblas were to provide very high performance, close to what you get from state-of-the-art BLAS and LAPACK libraries, and ease of use, which means that in the ideal case, you can just mechanically translate a matrix expression from formulas to Java code.
In all brevity, here is what you need to know to get started:
The matrix classes (DoubleMatrix, FloatMatrix, ComplexDoubleMatrix, and ComplexFloatMatrix) live in the package org.jblas and represent real and complex matrices in single and double precision.
Matrices are constructed with constructors or static methods like ones (constructs a matrix of all ones), zeros (all zeros), rand (entries uniformly distributed between 0 and 1), randn (entries normally distributed), eye (unit matrix), and diag (matrix with given diagonal). Dimensions are specified in the order “row”, “column”. The number of columns defaults to 1 if omitted (meaning that you construct a column vector if you supply just one dimension).
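For instance, here is a minimal sketch of these factory methods on DoubleMatrix (the dimensions are arbitrary):

```java
import org.jblas.DoubleMatrix;

public class Construct {
    public static void main(String[] args) {
        DoubleMatrix z = DoubleMatrix.zeros(3, 4);  // 3 rows, 4 columns, all zeros
        DoubleMatrix o = DoubleMatrix.ones(3, 4);   // all ones
        DoubleMatrix u = DoubleMatrix.rand(3, 4);   // uniform entries between 0 and 1
        DoubleMatrix g = DoubleMatrix.randn(3, 4);  // standard normal entries
        DoubleMatrix i = DoubleMatrix.eye(3);       // 3x3 unit matrix
        DoubleMatrix v = DoubleMatrix.ones(3);      // one dimension: a 3x1 column vector
        DoubleMatrix d = DoubleMatrix.diag(v);      // 3x3 matrix with v on the diagonal
    }
}
```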
Individual elements are accessed with put and get. Methods also exist for reading or writing a whole column, row, or submatrix.
For arithmetic, + becomes add, - becomes sub, * becomes mul, / becomes div, and so on.
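A quick sketch of this naming scheme in action:

```java
import org.jblas.DoubleMatrix;

public class Arithmetic {
    public static void main(String[] args) {
        DoubleMatrix a = DoubleMatrix.rand(2, 2);
        DoubleMatrix b = DoubleMatrix.ones(2, 2);
        DoubleMatrix sum  = a.add(b);  // a + b
        DoubleMatrix diff = a.sub(b);  // a - b
        DoubleMatrix prod = a.mul(b);  // a * b, element-wise (see below)
        DoubleMatrix quot = a.div(b);  // a / b, element-wise
    }
}
```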
You can also pass a double or float value, or a matrix with only one element, as the argument to a method, for example, to add the same value to all elements of the matrix.
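For example, shifting or scaling all elements at once; DoubleMatrix.scalar is used here as a convenience for building the one-element matrix:

```java
import org.jblas.DoubleMatrix;

public class Scalars {
    public static void main(String[] args) {
        DoubleMatrix m = DoubleMatrix.zeros(2, 3);
        DoubleMatrix shifted = m.add(1.5);  // add 1.5 to every element
        DoubleMatrix scaled  = m.mul(2.0);  // multiply every element by 2
        // A matrix with only one element behaves the same way:
        DoubleMatrix alsoShifted = m.add(DoubleMatrix.scalar(1.5));
    }
}
```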
mul is element-wise multiplication. Matrix-matrix multiplication is called mmul.
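A small example of the distinction:

```java
import org.jblas.DoubleMatrix;

public class Products {
    public static void main(String[] args) {
        DoubleMatrix a = new DoubleMatrix(new double[][] {{1, 2}, {3, 4}});
        DoubleMatrix b = new DoubleMatrix(new double[][] {{2, 0}, {0, 2}});
        DoubleMatrix hadamard = a.mul(b);  // element-wise: {{2, 0}, {0, 8}}
        DoubleMatrix product  = a.mmul(b); // matrix product: {{2, 4}, {6, 8}}
    }
}
```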
In-place variants exist as well: addi is like +=.
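For example (the same i suffix applies to the other operations, like subi and muli):

```java
import org.jblas.DoubleMatrix;

public class InPlace {
    public static void main(String[] args) {
        DoubleMatrix m = DoubleMatrix.zeros(2, 2);
        m.addi(1.0);                     // m += 1, written back into m
        m.addi(DoubleMatrix.ones(2, 2)); // m += ones, no new matrix allocated
        System.out.println(m);           // every entry is now 2.0
    }
}
```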
What is missing right now:
jblas stores matrices in plain Java double and float arrays. Whenever you call a native function, the array is first copied. This means that it doesn’t make much sense to call a native routine if its computation is linear in the size of the data, and that covers most of BLAS Level 1 and Level 2. jblas therefore uses the Java implementation for things like vector addition, or even matrix-vector multiplication, and is consequently not as fast as native BLAS for those operations. Currently, I’m contemplating some caching schemes to improve performance here.