Supercomputer

Source: Wikipedia: Supercomputer


The Columbia Supercomputer, located at the NASA Ames Research Center.

A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s, when Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, much as the minicomputer market had been created a decade earlier, but many of these vendors disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, which had purchased many of the 1980s companies to gain their experience. As of May 2010, the Cray Jaguar is the fastest supercomputer in the world.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires semi-infinite computing resources.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
Contents

1 Hardware and software design
   1.1 Supercomputer challenges, technologies
   1.2 Processing techniques
   1.3 Operating systems
   1.4 Programming
   1.5 Software tools
2 Modern supercomputer architecture
3 Special-purpose supercomputers
4 The fastest supercomputers today
   4.1 Measuring supercomputer speed
   4.2 The TOP500 list
   4.3 Current fastest supercomputer system
   4.4 Quasi-supercomputing
5 Research and development
6 Timeline of supercomputers
7 See also
8 Notes
9 External links

Hardware and software design
Processor board of a CRAY YMP vector computer

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
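For reference, Amdahl's law can be written as follows (a standard formulation, not specific to any one machine), where P is the fraction of the workload that can be parallelized and N is the number of processors:

    S(N) = \frac{1}{(1 - P) + P/N}

Even as N grows without bound, the achievable speedup is capped at 1/(1 - P), which is why so much design effort goes into shrinking the serial fraction.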
Supercomputer challenges, technologies

* A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
* Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many metres across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
* Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

Technologies developed for supercomputers include:

* Vector processing
* Liquid cooling
* Non-Uniform Memory Access (NUMA)
* Striped disks (the first instance of what was later called RAID)
* Parallel filesystems

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
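To make the idea concrete, the sketch below shows the same element-wise addition written as an ordinary scalar loop and with SIMD intrinsics; it assumes an x86 processor with SSE support and is an illustration of the technique, not code from any particular supercomputer:

    #include <xmmintrin.h>  /* SSE intrinsics; assumes an x86 CPU with SSE */

    /* Scalar version: one addition per loop iteration. */
    void add_scalar(const float *a, const float *b, float *c, int n) {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* SIMD version: four additions per iteration using 128-bit registers. */
    void add_simd(const float *a, const float *b, float *c, int n) {
        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
        }
        for (; i < n; i++)  /* handle any leftover elements */
            c[i] = a[i] + b[i];
    }

Modern compilers will often auto-vectorize the scalar form, which is how the same trickle-down benefit reaches ordinary application code.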

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraflops. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
Operating systems

Supercomputers today most often use variants of Linux.[1]

Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. In a similar manner, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of computer systems such as Cray's Unicos, or Linux.
Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.
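As a minimal sketch of this message-passing style (a generic MPI example in C, not taken from any specific system), each process computes a partial sum and a single collective call combines the results:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of processes */

        /* Each process sums its own share of the integers 1..n. */
        long long n = 1000000, local = 0, total = 0;
        for (long long i = rank + 1; i <= n; i += size)
            local += i;

        /* Combine the partial sums on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG_INT, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %lld\n", total);
        MPI_Finalize();
        return 0;
    }

Such a program is typically compiled with an MPI wrapper compiler (e.g. mpicc) and launched across nodes with mpirun, with the interconnect handling the actual data movement.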
Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community that often creates disruptive technology.
Modern supercomputer architecture
IBM Roadrunner - LANL
The CPU Architecture Share of Top500 Rankings between 1993 and 2009.

Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:

* A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
* A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, wherein the application-level software is indifferent to the number of processors. The processors share tasks using Symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
* A SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general-purpose commodity processor or a special-purpose vector processor. It could also be a high-performance processor or a low-power processor. As of 2007, the processor executes several SIMD instructions per nanosecond.

As of November 2009, the fastest supercomputer in the world is the Cray XT5 Jaguar system at the National Center for Computational Sciences, with more than 19,000 computers and 224,000 processing elements based on standard AMD processors. The fastest heterogeneous machine is IBM Roadrunner, a cluster of 3,240 computers, each with 40 processing cores, that includes both AMD and Cell processors. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.

In February 2009, IBM also announced work on "Sequoia," expected to be a 20 petaflops supercomputer. This would be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2011.[2]

Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a ten-year-old supercomputer, and the design concepts that allowed past supercomputers to out-perform contemporaneous desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production. A current quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads that required such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars. Supercomputing is also increasing in density, making desktop supercomputers possible: computing power that in 1998 required a large room now fits in less than a desktop footprint.

In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.
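As a small illustration of coarse-grained decomposition (shown here in shared-memory form with OpenMP; on a cluster the same structure would be expressed with MPI, and the chunk workload below is a made-up placeholder):

    #include <omp.h>
    #include <stdio.h>

    #define CHUNKS 16
    #define CHUNK_SIZE 1000000L

    /* Placeholder for an expensive piece of work that needs no data
       from any other chunk until the final combination step. */
    static double process_chunk(int c) {
        double acc = 0.0;
        for (long i = 0; i < CHUNK_SIZE; i++)
            acc += (c * CHUNK_SIZE + i) * 1e-9;
        return acc;
    }

    int main(void) {
        double results[CHUNKS];

        /* Chunks are independent, so they can run in parallel with no
           communication between processing units. */
        #pragma omp parallel for
        for (int c = 0; c < CHUNKS; c++)
            results[c] = process_chunk(c);

        double total = 0.0;
        for (int c = 0; c < CHUNKS; c++)
            total += results[c];
        printf("total = %f\n", total);
        return 0;
    }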
Special-purpose supercomputers

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, achieving better price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular specialized set of problems.

Examples of special-purpose supercomputers:

* Belle, Deep Blue, and Hydra, for playing chess
* Reconfigurable computing machines or parts of machines
* GRAPE, for astrophysics and molecular dynamics
* Deep Crack, for breaking the DES cipher
* MDGRAPE-3, for protein structure computation
* D. E. Shaw Research Anton, for simulating molecular dynamics [3]

The fastest supercomputers today
Measuring supercomputer speed
14 countries account for the vast majority of the world's 500 fastest supercomputers, with over half being located in the United States.

In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10¹² FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10¹⁵ FLOPS, pronounced petaflops). This measurement is based on a particular benchmark, which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
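To illustrate how such a rate is obtained in principle (a toy measurement in C, not the LINPACK benchmark itself; the kernel and sizes are arbitrary choices), one counts the floating point operations a kernel performs and divides by the elapsed time:

    #include <stdio.h>
    #include <time.h>

    #define N    5000000L
    #define REPS 100

    static double x[N], y[N];

    int main(void) {
        double a = 2.5;
        for (long i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        clock_t start = clock();
        for (int r = 0; r < REPS; r++)
            for (long i = 0; i < N; i++)
                y[i] = a * x[i] + y[i];   /* 2 floating point ops per element */
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        double flops = 2.0 * N * REPS / seconds;   /* total ops / time */
        printf("check value %f\n", y[0]);          /* keep the loop from being optimized away */
        printf("%.2f GFLOPS\n", flops / 1e9);
        return 0;
    }

LINPACK applies the same idea to LU decomposition, whose operation count for an n-by-n matrix is roughly 2n³/3, so the reported FLOPS figure is that count divided by the measured solve time.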

"Petascale" supercomputers can process one quadrillion (1015) (1000 trillion) FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one quintillion (1018) FLOPS (one million teraflops).
The TOP500 list
Main article: TOP500

Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
Current fastest supercomputer system

In November 2009, the AMD Opteron-based Cray XT5 Jaguar at the Oak Ridge National Laboratory was announced as the fastest operational supercomputer, with a sustained processing rate of 1.759 PFLOPS.[4][5]
Quasi-supercomputing
A Blue Gene/P node card

Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme.

The fastest cluster, Folding@home, reported over 7.8 petaflops of processing power as of December 2009. Of this, 2.3 petaflops are contributed by clients running on PlayStation 3 systems and another 5.1 petaflops by the newly released GPU2 client.[6]

Another approach is the BOINC platform, which hosts a number of distributed computing projects. As of April 2010, BOINC recorded a processing power of over 5 petaflops through over 580,000 active computers on the network.[7] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 1.4 petaflops through over 30,000 active computers.[8]

As of April 2010, GIMPS's distributed Mersenne Prime search achieves about 45 teraflops.[9]

Google's search engine system may also be regarded as a "quasi-supercomputer", with an estimated total processing power of between 126 and 316 teraflops as of April 2004.[10] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[11] According to recent estimates, the processing power of Google's cluster might reach 20 to 100 petaflops.[12]

The PlayStation 3 Gravity Grid uses a network of 16 machines and exploits the Cell processor for its intended application, which is binary black hole coalescence using perturbation theory.[13][14] Each Cell processor has a main CPU and 6 floating-point vector processors, giving the cluster a total of 16 general-purpose processors and 96 vector processors. The machine has a one-time cost of $9,000 to build and is adequate for black-hole simulations, which would otherwise cost $6,000 per run on a conventional supercomputer. The black hole calculations are not memory-intensive and are highly localized, and so are well-suited to this architecture.

Other notable computer clusters are the flash mob cluster and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires uniform architecture.
Research and development

IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".

Other PFLOPS projects include one by Narendra Karmarkar in India,[15] a CDAC effort targeted for 2010,[16] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[17]

In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[18] Meanwhile, IBM is constructing a 20 PFLOPS supercomputer at Lawrence Livermore National Laboratory, named Sequoia, which is scheduled to go online in 2011.

Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10¹⁸ FLOPS, one quintillion FLOPS) in 2019.[19]

Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10²¹ FLOPS, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[20] Such systems might be built around 2030.[21]
Timeline of supercomputers

This is a list of the record-holders for fastest general-purpose supercomputer in the world, and the year each one set the record. For entries prior to 1993, this list refers to various sources.[22] From 1993 to the present, the list reflects the TOP500 listing,[23] and the "Peak speed" is given as the "Rmax" rating.
Year | Supercomputer | Peak speed (Rmax) | Location
1938 | Zuse Z1 | 1 OPS | Konrad Zuse, Berlin, Germany
1941 | Zuse Z3 | 20 OPS | Konrad Zuse, Berlin, Germany
1943 | Colossus 1 | 5 kOPS | Post Office Research Station, Bletchley Park, UK
1944 | Colossus 2 (Single Processor) | 25 kOPS | Post Office Research Station, Bletchley Park, UK
1946 | Colossus 2 (Parallel Processor) | 50 kOPS | Post Office Research Station, Bletchley Park, UK
1946 | UPenn ENIAC (before 1948+ modifications) | 5 kOPS | Department of War, Aberdeen Proving Ground, Maryland, USA
1954 | IBM NORC | 67 kOPS | Department of Defense, U.S. Naval Proving Ground, Dahlgren, Virginia, USA
1956 | MIT TX-0 | 83 kOPS | Massachusetts Inst. of Technology, Lexington, Massachusetts, USA
1958 | IBM AN/FSQ-7 | 400 kOPS | 25 U.S. Air Force sites across the continental USA and 1 site in Canada (52 computers)
1960 | UNIVAC LARC | 250 kFLOPS | Atomic Energy Commission (AEC), Lawrence Livermore National Laboratory, California, USA
1961 | IBM 7030 "Stretch" | 1.2 MFLOPS | AEC-Los Alamos National Laboratory, New Mexico, USA
1964 | CDC 6600 | 3 MFLOPS | AEC-Lawrence Livermore National Laboratory, California, USA
1969 | CDC 7600 | 36 MFLOPS |
1974 | CDC STAR-100 | 100 MFLOPS |
1975 | Burroughs ILLIAC IV | 150 MFLOPS | NASA Ames Research Center, California, USA
1976 | Cray-1 | 250 MFLOPS | Energy Research and Development Administration (ERDA), Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide)
1981 | CDC Cyber 205 | 400 MFLOPS | (~40 systems worldwide)
1983 | Cray X-MP/4 | 941 MFLOPS | U.S. Department of Energy (DoE), Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing
1984 | M-13 | 2.4 GFLOPS | Scientific Research Institute of Computer Complexes, Moscow, USSR
1985 | Cray-2/8 | 3.9 GFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
1989 | ETA10-G/8 | 10.3 GFLOPS | Florida State University, Florida, USA
1990 | NEC SX-3/44R | 23.2 GFLOPS | NEC Fuchu Plant, Fuchū, Tokyo, Japan
1993 | Thinking Machines CM-5/1024 | 59.7 GFLOPS | DoE-Los Alamos National Laboratory; National Security Agency
1993 | Fujitsu Numerical Wind Tunnel | 124.50 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1993 | Intel Paragon XP/S 140 | 143.40 GFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1994 | Fujitsu Numerical Wind Tunnel | 170.40 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1996 | Hitachi SR2201/1024 | 220.4 GFLOPS | University of Tokyo, Japan
1996 | Hitachi/Tsukuba CP-PACS/2048 | 368.2 GFLOPS | Center for Computational Physics, University of Tsukuba, Tsukuba, Japan
1997 | Intel ASCI Red/9152 | 1.338 TFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1999 | Intel ASCI Red/9632 | 2.3796 TFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
2000 | IBM ASCI White | 7.226 TFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
2002 | NEC Earth Simulator | 35.86 TFLOPS | Earth Simulator Center, Yokohama, Japan
2004 | IBM Blue Gene/L | 70.72 TFLOPS | DoE/IBM Rochester, Minnesota, USA
2005 | IBM Blue Gene/L | 136.8 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2005 | IBM Blue Gene/L | 280.6 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2007 | IBM Blue Gene/L | 478.2 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2008 | IBM Roadrunner | 1.026 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
2008 | IBM Roadrunner | 1.105 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
2009 | Cray Jaguar | 1.759 PFLOPS | DoE-Oak Ridge National Laboratory, Tennessee, USA
See also

* The Journal of Supercomputing

Notes

1. ^ a b Top500 OS chart
2. ^ IBM to build new monster supercomputer, by Tom Jowitt, TechWorld, 02/04/2009
3. ^ D.E. Shaw Research Anton
4. ^ "Jaguar supercomputer races past Roadrunner in Top500". cnet.com. 15. http://news.cnet.com/8301-31021_3-10397627-260.html.
5. ^ "Oak Ridge 'Jaguar' Supercomputer Is World's Fastest". sciencedaily.com. 17. http://www.sciencedaily.com/releases/2009/11/091116204229.htm.
6. ^ Folding@home: OS Statistics, Stanford University, http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats, retrieved 2009-12-06
7. ^ BOINCstats: BOINC Combined, BOINC, http://www.boincstats.com/stats/project_graph.php?pr=bo, retrieved 2010-04-13. Note: these links give current statistics, not those from the date last accessed.
8. ^ BOINCstats: MilkyWay@home, BOINC, http://boincstats.com/stats/project_graph.php?pr=milkyway, retrieved 2010-03-05. Note: these links give current statistics, not those from the date last accessed.
9. ^ PrimeNet 5.0, http://mersenne.org/primenet, retrieved 2010-04-13
10. ^ How many Google machines, April 30, 2004
11. ^ Markoff, John; Hensell, Saul (June 14, 2006). "Hiding in Plain Sight, Google Seeks More Power". New York Times. http://www.nytimes.com/2006/06/14/technology/14search.html. Retrieved 2008-03-16.
12. ^ Google Surpasses Supercomputer Community, Unnoticed?, May 20, 2008.
13. ^ "PlayStation 3 tackles black hole vibrations", by Tariq Malik, January 28, 2009, MSNBC
14. ^ PlayStation3 Gravity Grid
15. ^ Athley, Gouri Agtey; Rajeshwari Adappa (30 October, 2006). ""Tatas get Karmakar to make super comp"". The Economic Times. http://economictimes.indiatimes.com/articleshow/msid-225517,curpg-2.cms. Retrieved 2008-03-16.
16. ^ C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010.[dead link]
17. ^ ""National Science Board Approves Funds for Petascale Computing Systems"". U.S. National Science Foundation. August 10, 2007. http://www.nsf.gov/news/news_summ.jsp?cntn_id=109850. Retrieved 2008-03-16.
18. ^ "NASA collaborates with Intel and SGI on forthcoming petaflops super computers". Heise online. 2008-05-09. http://www.heise.de/english/newsticker/news/107683.
19. ^ Thibodeau, Patrick (2008-06-10). "IBM breaks petaflop barrier". InfoWorld. http://www.infoworld.com/article/08/06/10/IBM_breaks_petaflop_barrier_1.html.
20. ^ DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. pp. 391–402. ISBN 1595930191. http://portal.acm.org/citation.cfm?id=1062325.
21. ^ "IDF: Intel says Moore's Law holds until 2029". Heise Online. 2008-04-04. http://www.heise.de/english/newsticker/news/106017.
22. ^ CDC timeline at Computer History Museum
23. ^ Directory page for Top500 lists. Result for each list since June 1993

External links

* Supercomputing at the Open Directory Project
