Introduction
The CDC STAR-100 was a vector supercomputer designed, manufactured, and sold by Control Data Corporation (CDC). It was among the earliest machines to use a vector processor to improve performance on suitable scientific applications. It was also the first supercomputer to use integrated circuits and the first to be equipped with one million words of computer memory. The "100" in the name refers to the machine's theoretical peak processing speed of 100 million floating-point operations per second.
The design was submitted in the mid-1960s as a bid to Lawrence Livermore National Laboratory, which was seeking a partner that would build a far faster machine at its own expense and then lease the finished product to the lab. The machine was announced publicly in the early 1970s, and on August 17, 1971, CDC announced that General Motors had placed the first commercial order.
Characteristics
The STAR had a 64-bit architecture with 195 instructions. Its primary innovation was the addition of 65 vector instructions for vector processing. The operations performed by these instructions were strongly influenced by concepts and operators from the APL programming language; in particular, the idea of "control vectors" and several instructions for permuting vectors under a control vector were carried over directly from APL.
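As a rough illustration of the control-vector idea (a hypothetical C sketch, not STAR-100 code): the control vector acts as a per-element bit mask, and only elements whose control bit is set take part in the vector operation.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: a "control vector" is a per-element bit mask.
   Elements whose control bit is clear are left untouched. */
static void vadd_masked(const double *a, const double *b, double *c,
                        const uint8_t *ctrl, int n)
{
    for (int i = 0; i < n; i++)
        if (ctrl[i])            /* control bit set: element participates */
            c[i] = a[i] + b[i]; /* clear bits leave c[i] unchanged */
}

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4] = {0};
    uint8_t ctrl[4] = {1, 0, 1, 0};   /* select elements 0 and 2 only */

    vadd_masked(a, b, c, ctrl, 4);
    for (int i = 0; i < 4; i++)
        printf("c[%d] = %g\n", i, c[i]);   /* prints 11, 0, 33, 0 */
    return 0;
}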
The main memory held 65,536 512-bit superwords (SWORDs). It was built from core memory with a 1.28 μs access time and was 32-way interleaved to pipeline memory accesses. Main memory was reached over a 512-bit bus managed by the storage access controller (SAC), which handled requests from the stream unit. The stream unit accessed main memory through the SAC over three 128-bit data buses, two for reads and one for writes. A fourth 128-bit data bus carried control-vector accesses, I/O, and instruction fetches.
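The point of 32-way interleaving is that consecutive addresses fall in different banks, so a sequential stream of accesses can be pipelined while each slow core bank recovers. A minimal sketch of the usual low-order mapping (illustrative only; the STAR's exact address mapping is not given here):

#include <stdio.h>

/* Hypothetical low-order interleaving: consecutive word addresses land
   in consecutive banks, keeping all 32 banks busy during a sequential
   stream while each core bank completes its 1.28 us cycle. */
#define NBANKS 32u

static unsigned bank_of(unsigned addr)   { return addr % NBANKS; }
static unsigned offset_of(unsigned addr) { return addr / NBANKS; }

int main(void)
{
    for (unsigned addr = 0; addr < 8; addr++)
        printf("word %u -> bank %u, offset %u\n",
               addr, bank_of(addr), offset_of(addr));
    return 0;
}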
Performance and usage
For a variety of reasons, the STAR-100's actual performance fell well short of its theoretical peak. First, the startup time of the memory-to-memory vector instructions was quite high, because the pipeline from memory to the functional units was so long: the STAR's pipelines were significantly deeper than the register-based pipelined functional units of the CDC 7600. The STAR's longer cycle time (a slower clock than the 7600's) made the problem worse. When loops worked on short data sets, the time spent filling the vector pipeline exceeded the time the vector instruction saved; the break-even vector length, beyond which the STAR ran faster than the 7600, was roughly 50 elements.
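This break-even behavior follows from a simple timing model. Assuming, purely for illustration, that a vector instruction costs a fixed startup plus one cycle per element while a scalar loop costs several cycles per element (the constants below are invented to put the crossover near 50 elements, not measured STAR-100 figures):

#include <stdio.h>

/* Toy timing model, not measured data: vector time = startup + n cycles,
   scalar time = 4 cycles per element. Break-even at n = 150/(4-1) = 50. */
static double t_vector(double n) { return 150.0 + 1.0 * n; }
static double t_scalar(double n) { return 4.0 * n; }

int main(void)
{
    for (int n = 20; n <= 80; n += 20)
        printf("n=%2d  vector=%6.1f  scalar=%6.1f  %s\n",
               n, t_vector(n), t_scalar(n),
               t_vector(n) < t_scalar(n) ? "vector wins" : "scalar wins");
    return 0;
}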
Lawrence Livermore National Laboratory received two STAR-100 systems, and NASA Langley Research Center received one. Ahead of the STAR's delivery, LLNL programmers wrote STACKLIB, a library of subroutines that simulated the STAR's vector operations on the 7600. They discovered that programs modified to use STACKLIB ran faster than before even on the 7600, which raised the bar the STAR's performance had to clear.
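STACKLIB's actual interface is not reproduced here, but the idea, rendered as a hypothetical C fragment, was to rewrite inner loops as calls to a small set of vector kernels: on the 7600 each kernel is a tuned scalar loop, while on the STAR the same call maps naturally onto a memory-to-memory vector instruction.

#include <stdio.h>

/* Hypothetical sketch of the STACKLIB idea (names invented): an inner
   loop becomes one call to a vector kernel. */
static void vmul(const double *a, const double *b, double *c, long n)
{
    for (long i = 0; i < n; i++)
        c[i] = a[i] * b[i];
}

int main(void)
{
    double a[3] = {1, 2, 3}, b[3] = {4, 5, 6}, c[3];
    vmul(a, b, c, 3);                 /* replaces an explicit loop */
    printf("%g %g %g\n", c[0], c[1], c[2]);
    return 0;
}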