Questions tagged [blas]

The Basic Linear Algebra Subprograms (BLAS) are a standard set of interfaces for low-level vector and matrix operations commonly used in scientific computing.

A reference implementation is available at Netlib; optimized implementations are also available for most high-performance computing architectures, for example OpenBLAS, ATLAS, Intel MKL, and NVIDIA cuBLAS.

The BLAS routines are divided into three levels:

  • Level 1: vector operations, e.g. vector addition, dot product
  • Level 2: matrix-vector operations, e.g. matrix-vector multiplication
  • Level 3: matrix-matrix operations, e.g. matrix multiplication
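As an illustration, NumPy's array operations dispatch to whichever BLAS the build links against, and the three levels map onto familiar operations (a sketch; the exact routines invoked depend on the NumPy build):

```python
import numpy as np

x = np.random.rand(100)
y = np.random.rand(100)
A = np.random.rand(100, 100)
B = np.random.rand(100, 100)

# Level 1 (vector-vector): dot product, typically backed by *dot routines
s = x @ y

# Level 2 (matrix-vector): typically backed by *gemv
v = A @ x

# Level 3 (matrix-matrix): typically backed by *gemm, where optimized
# BLAS implementations pay off the most (blocking, cache reuse, threads)
C = A @ B
```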
906 questions
0 votes, 1 answer

Adapt dgemm example code to use sgemm (scalapack)

I need to make the following program (from http://www.netlib.org/scalapack/examples/pblas.tgz) work with SGEMM. What do I need to change to make it work? My knowledge of Fortran is quite limited, I'm pretty much treating this as a black-box and…
pldimitrov • 1,597 • 2 • 16 • 21
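In BLAS/LAPACK naming, the change from DGEMM to SGEMM is the precision prefix: D routines operate on double precision, S routines on single, so declarations must change to match (in Fortran, DOUBLE PRECISION becomes REAL; the PBLAS call PDGEMM likewise becomes PSGEMM). The same naming convention can be seen through SciPy's low-level BLAS wrappers (a sketch, assuming SciPy is installed):

```python
import numpy as np
from scipy.linalg import blas

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

# "d" prefix: double precision (float64)
c_double = blas.dgemm(alpha=1.0, a=a, b=b)

# "s" prefix: single precision (float32); operands should already be
# float32, otherwise the wrapper converts them with an extra copy
c_single = blas.sgemm(alpha=1.0,
                      a=a.astype(np.float32),
                      b=b.astype(np.float32))
```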
0 votes, 0 answers

Armadillo Library: pinv() crashes the executable

I'm trying to use Armadillo for linear algebra calculations on rather large matrices (1500*1125) (representing images, like in Matlab). I've written code that uses Armadillo to calculate the pseudo-inverse, but the executable crashes immediately upon…
0 votes, 1 answer

Hopefully Quick Parallel Optimal Lapack Routine (gfortran) Questions

I thought I had a very clear understanding of this until two days ago, but now I might be over thinking it and confusing myself. I'll explain what I'm doing and then ask a couple of probably simplistic questions, but I've searched and found…
0 votes, 1 answer

Writing a configurable scalapack linear system solver that prints execution time

I'm trying to adapt the following example program to use as a coarse-grained parallel benchmark in my experiments. I added the following lines to the code: START_TIME = MPI_WTIME() * <- added this CALL PDGESV( N, NRHS, MEM( IPA ), 1, 1,…
pldimitrov • 1,597 • 2 • 16 • 21
0 votes, 1 answer

Warm starting symmetric eigenvalue computation?

Do any standard (LAPACK / ARPACK / etc.) implementations of the symmetric eigenvalue problem allow "warm starting"? That is, can they be accelerated if I already have a pretty good guess for the eigenvalues and eigenvectors of my matrix? With…
Robert T. McGibbon • 5,075 • 3 • 37 • 45
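ARPACK does expose a limited form of warm starting: its iteration can be seeded with a starting vector. Through SciPy's wrapper this is the `v0` argument of `eigsh` (a sketch, assuming SciPy is installed; seeding with an approximate eigenvector usually cuts iterations, but it is not a full warm start of all requested pairs):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
M = rng.random((200, 200))
A = (M + M.T) / 2  # symmetric test matrix

# Suppose v_guess approximates the dominant eigenvector -- here a few
# power-iteration steps stand in for a guess carried over from a prior solve
v_guess = rng.random(200)
for _ in range(5):
    v_guess = A @ v_guess
    v_guess /= np.linalg.norm(v_guess)

# Seed ARPACK's Lanczos iteration with the guess via v0
vals, vecs = eigsh(A, k=3, v0=v_guess, which='LA')
```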
0 votes, 1 answer

Problems compiling example scalapack application

When I do: mpif77 example1.f -L scalapack/scalapack-1.8.0/ -lscalapack -L scalapack/blacs/BLACS/LIB -l:blacsF77init_MPI-LINUX-0.a -l:blacs_MPI-LINUX-0.a -l:blacsF77init_MPI-LINUX-0.a -L scalapack/blas/BLAS/ -l:blas_LINUX.a -L scalapack/lapack/…
pldimitrov • 1,597 • 2 • 16 • 21
0 votes, 1 answer

Is there a configuration under which a numpy operation will work on more than a single core/thread?

If so, which? The specific example I'm interested in is np.einsum. I'm really confused about what OpenBLAS / BLAS / LAPACK / ATLAS / Intel MKL offer. I've tried reading about this and installing packages but have made a mess, so I've decided to go…
evan54 • 3,585 • 5 • 34 • 61
0 votes, 0 answers

cuBLAS - Issue with cublasSdot and cublasSgemv not taking pointers to GPU memory

I'm playing around with cuBLAS, trying to get a dot product and a matrix-vector product to work. While doing so, I've come across a problem. First off, the code: float result_1; cublasSdot_v2(handle, c_nrSV[0] + 1, d_results[0], 1, d_ones, 1,…
spurra • 1,007 • 2 • 13 • 38
0 votes, 1 answer

RealMatrix multiply without reassign

In my Java source I must execute the following lines very often: vecX = EigenMat.multiply(vecX); vecY = EigenMat.multiply(vecY); EigenMat is an N x N matrix with N~40; vecX/vecY are N x 1 vectors (internally a RealMatrix too). I used the "Sampler" from…
Matyro • 151 • 1 • 9
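One common optimization for this pattern, independent of the library: fuse the two matrix-vector products (Level 2) into a single matrix-matrix product (Level 3) by stacking the vectors as columns, which lets the BLAS reuse the matrix across both. A sketch in NumPy terms (names mirror the question; sizes are illustrative):

```python
import numpy as np

n = 40
rng = np.random.default_rng(1)
EigenMat = rng.random((n, n))
vecX = rng.random((n, 1))
vecY = rng.random((n, 1))

# Two Level 2 (matrix-vector) calls per step...
x2, y2 = EigenMat @ vecX, EigenMat @ vecY

# ...versus one Level 3 (matrix-matrix) call on the stacked N x 2 matrix
V = EigenMat @ np.hstack([vecX, vecY])
```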
0 votes, 0 answers

Segmentation fault when using MKL CBLAS

I will use CBLAS in a C++ project, so I am trying to learn it. I wrote a simple program, but it gives a segmentation fault. My code is #include #include #include #include #include #include…
Dundun • 1
0 votes, 1 answer

Can't disable Armadillo Wrapper During Compilation/Linking

I am trying to compile the Armadillo C++ Library under Windows 32 using MinGW32 and OpenBLAS. I've tried every tutorial and stackoverflow.com question on the topic, but still can't seem to disable the compilation of the wrapper.obj which produces…
lsdavies • 317 • 2 • 13
0 votes, 1 answer

Armadillo LAPACK and BLAS undefined references

I am struggling to build an Armadillo example with the BLAS and LAPACK libraries. This is my build log: 19:34:02 **** Rebuild of configuration Debug for project Amatest2 **** Info: Internal Builder is used for build g++…
geogerber • 1 • 2
0 votes, 1 answer

Segmentation Fault while Calling Lapack function from Eigen

My program is written in C++ and I use the Eigen library for the matrix operations inside it. There is a huge matrix product inside it, with dimensions 50000*1000 and 1000*50000. So I tried to call the BLAS function from the MKL library to improve the…
Jason • 1,200 • 1 • 10 • 25
0 votes, 1 answer

Sparse BLAS matrix-row vector product overwriting loop index

I am using the NIST Sparse BLAS v0.5 matrix-matrix multiplication routine, downloaded from http://math.nist.gov/~KRemington/fspblas/, to multiply a matrix one row at a time by a column vector. After calling the routine at a particular point (changes…
alex_d • 111 • 5
0 votes, 1 answer

OpenBLAS, R 3.1 and Fedora / CentOS dist

A while back I installed OpenBLAS on my CentOS server with R 3.02, with great success (over 50% improvement on the R benchmark-25). I followed the method described in the official R CRAN documentation…
Enzo • 2,543 • 1 • 25 • 38