
Can someone please explain the difference between a vector processor and an array processor, which one encounters when learning about the computer architecture involved in parallel programming?

One of the sources I referred to says that a vector processor is also called an array processor. It's a bit confusing. Thank you in advance!

user3666197
Aim

2 Answers


See "Supercomputer Languages" by R H Perrott and A Zarea-Aliabadi, ACM Computing Surveys, Vol 18, No 1, March 1986, pp. 7-8.
Some consider vector processors to be SIMD computers and some do not. If vector processors are counted as SIMD computers, they are pipelined SIMD (one instruction applied to a stream of operands flowing through a pipelined functional unit), whereas array processors are simultaneous SIMD (one instruction applied at the same moment by many processing elements).

Ming

Q : Can someone…explain the difference between a vector and an array processor…?

Yes. There is none.

Arrays are but human abstractions. Common computers "see" vectors: weakly organised sections of their physical address space, considered as a contiguous "block" space, where the vector's data reside cell by cell and get stored into and fetched from. This is expensive; as of 2020/Q1 some CPUs enjoy a few SIMD-instruction tricks to move data to/from memory at once in hardware-SIMD-specific blocks (vectors of a few data cells, currently not more than AVX-512's 512-bit blocks, so at most eight float64s or sixteen float32s, etc.).

So the data-"representation" is code-based, not hardware-based. Arrays, tensors in a bit more generalised view, are being stored in linearly addressed memory just by convention - being either:

  • the Fortran convention ( column-major, having the first index changing fastest ), or

  • the C-language convention ( row-major, having the last index changing fastest ),

so each array is just a sequence of such one-after-another aligned vector slices: a weakly, code-controlled sequence of vectors, each vector itself being but a weakly, code-controlled contiguous linear sequence of data cells in directly addressable memory.

( knowingly not speaking here about sparse-{vector|array|tensor} representation formats, where the addressing conventions for the non-zero elements, adopted for the sake of memory saving at the cost of slower access and cache-line inefficiencies, often rely on indirect-addressing tricks due to the nature of sparsity and may go too wild to touch on in brief here )


Known Exceptions :

If needed, SoC and FPGA devices may be designed so as to go the extra mile for genuinely hardware-supported {matrix|tensor} processing, yet all of that comes at extra cost: human invention, clever and energy-efficient silicon-level designs for such processors' matrix uops, and the vastly increased on-die and off-die data-I/O bandwidths needed for matrix processing that is both reasonably large ( O(N²) sizes ) and fast. These are very expensive, so do not expect any great magic even from these ultimately niche use-case approaches: their very limited, almost ASIC-like product market sizes leave little room to amortise the increased costs ( consumer-electronics markets are not seeking these, while MIL/R&D/FinTech and other special segments can pay for such extreme, non-COTS solutions ).

user3666197