Questions tagged [supercomputers]

Supercomputers belong to a class of highly specialised hardware infrastructures in which a large number of machines are pre-organised and linked together by specialised high-speed, low-latency interconnects, so that new forms of cooperative concurrent processing can be orchestrated. Having such an infrastructure is not enough, however: it is equally important to use system tools capable of harnessing as much of the available CPU power as possible.

Supercomputers first began to appear in the 1960s.

These early supercomputers had only a single, high-speed processor. Control Data Corporation's CDC-6600, designed by Seymour Cray, was about ten times faster than all other computers of its day, and was dubbed a supercomputer -- the first appearance of the term.

Later, as processing speed, cooling ability, and physical size hit limits, Cray pioneered the method of linking multiple processors together in order to get more speed out of the same machine. This is the same method used in today's supercomputers, which can range in size from thousands of processing cores to hundreds of thousands of processing cores.

Seymour Cray (yes, the supercomputer guy) said:

"A supercomputer turns compute-bound problems into I/O-bound problems."

and:

"It is not hard to build a fast processor or a fast memory, but the challenge is to build a fast system."

Interconnect latency is an additional time-domain penalty that every process has to pay whenever it uses a supercomputer's remote resources under a distributed computation-graph schedule.

Minimising interconnect latency costs is therefore one natural direction; designing smarter, overhead-aware computation graphs is the other. Together they are what pushes a supercomputing system towards the I/O-bound edge of its achievable performance.


91 questions
1 vote, 0 answers

SLURM squeue results - explanation of how users use nodes

As a noob, I have access to a supercomputer with SLURM. The squeue command gives the list of nodes used by the jobs of different users. A small example is given below. Why do some users, e.g. user1 (that's actually me), have one line (see below), while…
multipole
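A minimal sketch (in Python, not from the question) of pulling a per-user view out of squeue; it assumes the standard squeue options -u, --noheader and -o with the %i (job id), %D (node count) and %N (node list) format fields. A single job that spans many nodes still produces one line, which is one reason a user may appear only once.

    # Sketch: one line per job for a given user, parsed from squeue output.
    # Assumes SLURM's squeue command is available on PATH.
    import subprocess

    def jobs_for_user(user):
        """Return (job_id, node_count, node_list) for each of the user's jobs."""
        out = subprocess.run(
            ["squeue", "-u", user, "--noheader", "-o", "%i %D %N"],
            capture_output=True, text=True, check=True,
        ).stdout
        rows = []
        for line in out.splitlines():
            parts = line.split()
            job_id, node_count = parts[0], int(parts[1])
            node_list = parts[2] if len(parts) > 2 else ""  # pending jobs list no nodes
            rows.append((job_id, node_count, node_list))
        return rows

    if __name__ == "__main__":
        for job_id, nodes, node_list in jobs_for_user("user1"):
            print(f"job {job_id}: {nodes} node(s) -> {node_list}")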
1 vote, 1 answer

Supercomputing: smaller number of nodes with more CPUs per node vs. larger number of nodes with fewer CPUs per node

On a supercomputer, you have a set of nodes, and each node has some number of CPUs. Is it generally better to use, say, 20 CPUs on 1 node, as opposed to 2 nodes with 10 CPUs each? In both cases, there are 20 CPUs in total. Is the…
24n8
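One way to make the trade-off measurable (a sketch, not from the question, assuming mpi4py is installed and the script is launched with srun or mpirun -n 20): time a latency-sensitive collective. The 20 ranks look identical to the program either way, but the collective is usually cheaper when all ranks share one node than when they have to cross the interconnect.

    # Sketch: time repeated small allreduces to compare rank placements.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    comm.Barrier()                     # synchronise before timing
    t0 = MPI.Wtime()
    for _ in range(1000):              # many small collectives emphasise latency
        comm.allreduce(rank, op=MPI.SUM)
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        print(f"{comm.Get_size()} ranks: {elapsed:.4f} s for 1000 allreduces")

Running it once with all 20 ranks packed onto a single node and once spread over two nodes shows the placement cost directly.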
1 vote, 0 answers

Google quantum supremacy - how are the 10k years estimated?

Last week, Google published a paper called Quantum supremacy using a programmable superconducting processor, which brags about: Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our…
gsamaras
1 vote, 0 answers

Slurm error: "slurmstepd: error: no task list created!"

I'm attempting to run a simple job on Slurm but am getting a cryptic error message: slurmstepd: error: no task list created! I've run thousands of other jobs identical to the job I'm running here (with different parameters), but only this one run…
duhaime
1 vote, 1 answer

SLURM embarrassingly parallel submission taking too many resources

So I have the following submission script: #!/bin/bash # #SBATCH --job-name=P6 #SBATCH --output=P6.txt #SBATCH --partition=workq #SBATCH --ntasks=512 #SBATCH --time=18:00:00 #SBATCH --mem-per-cpu=2500 #SBATCH --cpus-per-task=1 #SBATCH…
1 vote, 2 answers

Get available memory under SLURM using C++

I'm working in an HPC environment and I'm using SLURM to submit my jobs to the queue. I'm writing my own memory caching mechanism and hence I want to know how much memory is available per node so that I can expand or reuse space. Is there a way to know…
Anurag Peshne
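The usual approach is to read what SLURM exports into the job environment. A sketch follows (in Python for brevity; the same getenv logic applies in C++): SLURM_MEM_PER_NODE or SLURM_MEM_PER_CPU (in MB) are only set when the job requested memory explicitly, so /proc/meminfo's MemAvailable serves as a fallback.

    # Sketch: discover how much memory this SLURM job may use.
    # SLURM_MEM_PER_NODE / SLURM_MEM_PER_CPU are only present when the job was
    # submitted with --mem / --mem-per-cpu; otherwise fall back to MemAvailable.
    import os

    def job_memory_mb():
        if "SLURM_MEM_PER_NODE" in os.environ:
            return int(os.environ["SLURM_MEM_PER_NODE"])
        if "SLURM_MEM_PER_CPU" in os.environ:
            cpus = int(os.environ.get("SLURM_CPUS_ON_NODE", "1"))
            return int(os.environ["SLURM_MEM_PER_CPU"]) * cpus
        with open("/proc/meminfo") as f:            # fallback: whole-node view
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) // 1024   # kB -> MB
        raise RuntimeError("could not determine available memory")

    print(f"usable memory: ~{job_memory_mb()} MB")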
1 vote, 1 answer

Golem task settings: How can we configure diverse workloads and tasks?

I have set up the Golem Factory platform on my Mac machine (macOS 10.13.2). I could successfully set up the Golem node. It is up and running. My Golem wallet is showing a balance of 1000 GNT. Now I am trying to add tasks in Golem. It is only…
Gokul Alex
1 vote, 0 answers

Draw a plot on a supercomputer using IPython

I want to plot a figure using Python on a supercomputer. For example, I wrote a script plot.py: import numpy as np import matplotlib.pyplot as plt .... .... plt.plot(m) # m is a matrix with size (1000,36) plt.show() If I do: python3…
SYuan
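Compute nodes normally have no display attached, so the common fix (a sketch, not taken from the question) is to select the non-interactive Agg backend and write the figure to a file instead of calling plt.show():

    # Sketch: plot on a headless (no-display) node with the Agg backend.
    import matplotlib
    matplotlib.use("Agg")               # must be selected before importing pyplot
    import matplotlib.pyplot as plt
    import numpy as np

    m = np.random.rand(1000, 36)        # stand-in for the question's matrix
    plt.plot(m)
    plt.xlabel("index")
    plt.savefig("plot.png", dpi=150)    # inspect the file later, e.g. via scp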
1 vote, 2 answers

Why doesn't Torque qsub create an output file?

I'm trying to start a task on a cluster via Torque PBS with the command qsub -o a.txt a.sh. The file a.sh contains a single line: hostname. After the qsub command I run qstat, which gives the following output: Job ID Name User Time…
r1d1
1 vote, 1 answer

Supercomputer: Dead simple example of a program to run on a supercomputer

I am learning how to use supercomputers to make good use of resources. Let's say I have a Python script that will create a text file with a given random number. myfile.py # Imports import random,os outdir = 'outputs' if not…
BhishanPoudel
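A self-contained version of such a payload could look like the sketch below (the result file name is a placeholder); it just writes one random number into the outputs directory, which is enough to confirm that a batch submission works end to end.

    # myfile.py - minimal job payload: write one random number to a file.
    import os
    import random

    outdir = "outputs"
    os.makedirs(outdir, exist_ok=True)          # create the directory if missing

    value = random.random()
    with open(os.path.join(outdir, "result.txt"), "w") as f:
        f.write(f"{value}\n")

    print(f"wrote {value} to {outdir}/result.txt")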
1 vote, 2 answers

Basic guidelines for high-performance benchmarking

I am going to benchmark several implementations of a numerical simulation software on a high-performance computer, mainly with regard to time - but other resources like memory usage, inter-process communication etc. could be interesting as well. As…
shuhalo
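For the wall-clock part, a minimal harness (a sketch; the simulation call is a placeholder) repeats each measurement and reports the minimum and the median, because single runs on shared HPC nodes are noisy:

    # Sketch: repeat a timed run several times and report robust statistics.
    import statistics
    import time

    def run_simulation():
        # placeholder for the real workload being benchmarked
        sum(i * i for i in range(10**6))

    def benchmark(fn, repeats=5):
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            fn()
            times.append(time.perf_counter() - t0)
        return min(times), statistics.median(times)

    best, median = benchmark(run_simulation)
    print(f"best {best:.4f} s, median {median:.4f} s over 5 runs")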
1 vote, 0 answers

Running a Spark application on a supercomputer

I have some questions about YARN. How can I run my jar file on YARN? I get: Exception in thread "main" java.lang.Exception: When running with master 'yarn-cluster' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment. Should I copy…
AHAD
1 vote, 3 answers

Can the announced Tegra K1 be a contender against x86 and x64 chips in supercomputing applications?

To clarify, can this RISC-based processor (the Tegra K1) be used without significant changes to today's supercomputer programs, and perhaps be a game changer because of its power, size, cost, and energy usage? I know it's going up against some x64…
Eric Martin
1 vote, 1 answer

Naive parallelization in a .pbs file

Is it possible to parallelize across a for loop in a PBS file? Below is my attempt.pbs file. I would like to allocate 4 nodes and simultaneously allocate 16 processes per node. I have successfully done this, but now I have 4 jobs and I would…
1 vote, 3 answers

Password hashing algorithm that will keep passwords safe even from supercomputers?

I was researching how MD5 is known to have collisions, so it's not secure enough. I am looking for some hashing algorithm that even supercomputers will take time to break. So can you tell me what hashing algorithm will keep my passwords safe…
s.d
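As a sketch of the usual direction (not from the question): use a slow, salted key-derivation function rather than a plain digest. The snippet below uses Python's standard hashlib.pbkdf2_hmac with an illustrative iteration count; memory-hard alternatives such as scrypt or Argon2 resist massively parallel hardware even better.

    # Sketch: salted, deliberately slow password hashing with PBKDF2-HMAC-SHA256.
    # The iteration count is illustrative; memory-hard KDFs (scrypt/Argon2) are
    # the stronger choice against GPU/ASIC attackers.
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None, iterations=600_000):
        salt = salt or os.urandom(16)               # fresh random salt per password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, iterations, digest

    def verify(password, salt, iterations, expected):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, expected)   # constant-time compare

    salt, iters, stored = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, iters, stored))   # True
    print(verify("wrong guess", salt, iters, stored))                    # False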