What’s Parallel Programming?

Parallel programming:

It is a form of computation in which many calculations are carried out simultaneously: large problems are divided into smaller ones, which are then solved at the same time.

Some of you may have a question: why do we need parallel programming?

=> To use multiple processors in parallel and solve problems more quickly than we could with a single processor.

 

Moore's Law:

The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented.

Moore predicted that this trend would continue for the foreseeable future. In subsequent years, the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the current definition of Moore's Law, which Moore himself has blessed. Most experts, including Moore himself, expect Moore's Law to hold for at least another two decades.

 

Parallel computing is important because we no longer live (and we haven't for quite some time) in a computing world where we can just sit around for a year or two and let Moore's Law take care of our performance issues.

The idea is that algorithms beat Moore's Law. It was publicised by the academic blog Algorithmic Game-Theory/Economics, which picked up on some interesting claims in a science policy report, Report to the President and Congress: Designing a Digital Future: Federally Funded R&D in Networking and IT.

It sounds good, and as programmers we really should be putting our effort into propagating the myth that software wins out over hardware - but, of course, we know the truth.

Google Will Do Anything To Beat Moore’s Law

The secret is to ride Moore’s Law for the electronic components of the systems, but then to keep doing other things on top of that. “If you look at our cost or energy per query, we are doing better than Moore’s Law because we have figured out better ways to optimize compute or storage at the application level,” Hölzle explained to The Next Platform. “So that has gone down more strongly than Moore’s Law.”

 

That was a quick look at the history and the work going on at present. Coming back to our topic:

There are several different forms of parallel computing:

  • Bit-Level Parallelism: A form of parallelism based on increasing the processor's word size. It reduces the number of instructions the system must run to perform an operation on variables larger than one word (a small sketch follows this list).

 

  • Instruction Level Parallelism:   

    It is a form of parallel computing that measures how many instructions a processor can execute at the same time. Hardware techniques that exploit it include (see the sketch after this list):

    1. Instruction pipelining.
    2. Out of order execution.
    3. Register renaming.
    4. Speculative execution.
    5. Branch prediction.

     

  • Task Parallelism:

    Task parallelism is a form of parallelization in which different processors run different parts (tasks) of the same program at the same time. It is also called function parallelism (a sketch follows this list).
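To make bit-level parallelism concrete, here is a minimal Java sketch (the class and flag names are purely illustrative): a 64-bit machine word lets a single AND instruction answer a question about dozens of flags at once, where a narrower word size would need several instructions.

```java
// Bit-level parallelism in miniature: one 64-bit AND tests many
// flags at once instead of looping over individual booleans.
public class BitLevelDemo {
    public static void main(String[] args) {
        long granted  = 0b1011L;  // up to 64 independent yes/no flags in one word
        long required = 0b0011L;

        // A single word-sized operation checks all required bits together.
        boolean allGranted = (granted & required) == required;
        System.out.println("all required flags set: " + allGranted);
    }
}
```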
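Instruction-level parallelism happens inside the processor, but its effect can be glimpsed from ordinary code. A rough sketch (actual behaviour depends on the CPU and the JIT compiler): a single accumulator forms one long dependency chain, while several independent accumulators give out-of-order execution independent additions to overlap.

```java
import java.util.Arrays;

// Hypothetical micro-demo: independent accumulators expose
// instruction-level parallelism that one dependency chain hides.
public class IlpDemo {
    public static void main(String[] args) {
        double[] data = new double[8_000_000];
        Arrays.fill(data, 1.0);

        // One accumulator: each addition depends on the previous one.
        double sum = 0.0;
        for (double x : data) {
            sum += x;
        }

        // Four independent accumulators: the CPU can keep several
        // additions in flight at once.
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (int i = 0; i < data.length; i += 4) {
            s0 += data[i];
            s1 += data[i + 1];
            s2 += data[i + 2];
            s3 += data[i + 3];
        }
        System.out.println(sum + " == " + (s0 + s1 + s2 + s3));
    }
}
```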
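Task parallelism, by contrast, is visible directly in application code. A minimal sketch using Java's standard ExecutorService, where two different functions run concurrently over the same data:

```java
import java.util.concurrent.*;

// Minimal task-parallelism sketch: two *different* computations
// (sum and max) run concurrently on the same input.
public class TaskParallelDemo {
    public static void main(String[] args) throws Exception {
        int[] data = {3, 1, 4, 1, 5, 9, 2, 6};
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Future<Integer> sum = pool.submit(() -> {
            int s = 0;
            for (int x : data) s += x;
            return s;
        });
        Future<Integer> max = pool.submit(() -> {
            int m = Integer.MIN_VALUE;
            for (int x : data) m = Math.max(m, x);
            return m;
        });

        System.out.println("sum=" + sum.get() + " max=" + max.get());
        pool.shutdown();
    }
}
```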

 

Pros of Parallel Computing:

    • Save Time/Money: Using more resources in parallel shortens task completion time, with potential cost savings. Parallel clusters can also be built from cheap commodity components.
    • Solve Larger Problems: Large, complex problems can be impractical to solve on a single computer, especially one with limited memory. Examples: 1. Problems requiring petaFLOPS of compute and petabytes of data. 2. Web search engines and databases processing millions of transactions per second.
    • Concurrency: Multiple computing resources can do many things simultaneously, compared with a single resource doing one thing at a time. For example, the Access Grid, which provides a virtual global collaboration network.
    • Use Non-Local Resources: Compute resources across a network can be used when local resources are scarce.

Cons of Parallel Computing:

  • Transmission Speed: Transmission speed is relatively low, as it depends on how fast data can move through hardware. Limits of the transmission medium (roughly 9 cm/nanosecond through copper wire) cap how quickly data can be moved.
  • Difficult Programming: It is difficult to write algorithms and programs that support parallel computing, since they require coordinating many interacting instructions. Only programmers with a solid grasp of concurrency can do it well.
  • Communication and Synchronization: Communication and synchronization between subtasks are typically among the greatest obstacles to good parallel program performance.

     

 

Java Parallel Programming:
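Java's standard library has offered parallel building blocks for a long time: threads, the java.util.concurrent package, the Fork/Join framework, and (since Java 8) parallel streams. A minimal sketch, not a tuned implementation, that sums squares across all available cores using a parallel stream:

```java
import java.util.stream.LongStream;

// A parallel stream splits the range across the common Fork/Join
// pool, computes partial sums on separate cores, and combines them.
public class ParallelSum {
    public static void main(String[] args) {
        long sum = LongStream.rangeClosed(1, 1_000_000)
                             .parallel()
                             .map(n -> n * n)
                             .sum();
        System.out.println("sum of squares = " + sum);
    }
}
```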

 

Parallel Programming in Python:
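In CPython, the Global Interpreter Lock prevents threads from running Python bytecode in parallel, so the standard multiprocessing module uses separate processes instead. A minimal sketch that maps a function over inputs on a pool of worker processes:

```python
# Minimal sketch: a process pool farms work out to one worker
# process per CPU core, sidestepping the GIL.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool() as pool:                       # one worker per core by default
        results = pool.map(square, range(10))  # split, compute, gather
    print(results)
```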

 

I found this interesting comic conversation while I was researching; check it out:

[Comic: "Parallel Programming on the Macintosh" / "Why parallel program on the Macintosh?"]

Future of Parallel Computing:

Parallel computing is expected to lead to other major changes in the industry. Major companies like Intel Corp. and Advanced Micro Devices Inc. have already integrated four processor cores into a single chip.

What is needed now is a matching breakthrough on the software side, and the race for results in parallel computing is in full swing. Another great challenge is writing software that divides a program's work into chunks that many processors can execute at once.

This may only be possible with new programming languages, which would revolutionize every piece of software written. Parallel computing may change the way computers work in the future.

 
