Parallel Compiler

Parallel Compiler. A parallelizing compiler is a translator that transforms source text written in a high-level language into a low-level language. A compiler typically operates in phases, each performing a task on the source program. A parallelizing compiler is enhanced so that, for the first time, parallelism can be discovered in a much broader class of programs, allowing multiple tasks to be run simultaneously.

Summary


  • 1 Requirements
  • 2 Features
  • 3 Classification
  • 4 Importance

Requirements

Carrying out a task in parallel requires not only a parallel microprocessor and parallel programming, but also a parallelizing Compiler.

The software side is decisive for parallelization: if programs are not designed with parallel execution in mind from the outset, it is impossible to obtain the substantial benefit that parallelizing a computing task can provide.

The compiler depends on the language being used, and speed increases for two fundamental reasons: the tasks run in parallel, and the compiled programs, being lower-level code, run faster.

Features

There are two options for parallel processing: Parallel Programming and Automatic Parallelization. The first consists of the programmer designing parallel code and then implementing it on Parallel Computers, so that the user controls the degree of parallelization; in other words, the performance of parallel programming depends on the programmer’s effort and on the architecture of the parallel computer. With the second option the user only needs to compile and run the numerical models, which makes automatic parallelization an attractive choice in scientific computing.
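
As a minimal sketch of the automatic-parallelization option, consider the kind of sequential numerical kernel a user would simply compile and run. The function name axpy is an illustrative assumption, and the compiler flag mentioned in the comment (GCC's -ftree-parallelize-loops) is only one possible way to request loop parallelization; flags and their effectiveness vary between compilers and versions.

#include <stddef.h>

/* Ordinary sequential C: the programmer writes no parallel constructs.
 * An auto-parallelizing compiler is asked to prove that the iterations are
 * independent and to distribute them across threads, for example with a
 * flag such as GCC's -ftree-parallelize-loops=4 (flags vary by compiler). */
void axpy(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++) {
        y[i] = a * x[i] + y[i];   /* each iteration touches only index i */
    }
}

In practice the compiler may still decline to parallelize such a loop unless it can prove that x and y do not overlap, which is one reason automatic parallelization gives uneven results across programs.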

Classification

Implicit parallelism:

  • The program is written in a sequential language, and the compiler is responsible for parallelizing it and allocating resources.
  • Limited exploitation of parallelism (it depends on the intelligence of the Compiler)
  • The programmer’s job is easy
  • All existing sequential code is exploited.

Explicit parallelism:

  • Parallel programming dialects are used (see the sketch after this list).
  • Better use of the machine’s parallel capabilities
  • More work for the developer
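
As a hedged sketch of explicit parallelism, the same kind of loop can be annotated by the programmer using OpenMP, one widely used parallel programming dialect for C; the function name is invented, and OpenMP is chosen here only as an example of such a dialect.

#include <stddef.h>

/* Explicit parallelism: the programmer states which loop runs in parallel
 * and how iterations are shared among threads, so the outcome depends on
 * the programmer and the machine rather than on the compiler's cleverness.
 * Build with an OpenMP-aware compiler (e.g. gcc -fopenmp). */
void axpy_explicit(size_t n, double a, const double *x, double *y)
{
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++) {
        y[i] = a * x[i] + y[i];
    }
}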

Importance

Automatic parallelization, by contrast, is a particular case of compilation in which the source code is sequential and the target code produced by the compiler is parallel. A parallelizing compiler analyzes the source code in much more detail than a conventional compiler does. If the compiler performs automatic parallelization efficiently, the program’s execution time is significantly reduced, a sign that the application’s parallelism was exploited. Since the performance achievable with automatic parallelization is not known a priori for a given program, the user needs to search for the best option the compiler offers. In other words, the performance of automatic parallelization depends on the code to be parallelized and on the compiler.
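
To illustrate the extra analysis involved, the hedged sketch below contrasts a loop whose iterations are independent with one that carries a dependence from one iteration to the next; the function names are invented, and whether a particular compiler actually parallelizes the first loop depends on its analysis and options.

#include <stddef.h>

/* Independent iterations: if the compiler can prove that x, y and z do not
 * overlap, it may distribute the iterations across threads. */
void add_arrays(size_t n, const double *x, const double *y, double *z)
{
    for (size_t i = 0; i < n; i++) {
        z[i] = x[i] + y[i];
    }
}

/* Loop-carried dependence: iteration i reads the value written by iteration
 * i - 1, so the compiler must keep this loop sequential (or apply a more
 * advanced rewrite such as a parallel prefix sum). */
void running_sum(size_t n, double *x)
{
    for (size_t i = 1; i < n; i++) {
        x[i] = x[i] + x[i - 1];
    }
}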

 

by Abdullah Sam
