August 02, 2004
Grid Computing


by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us. Thank you!

Introduction


Nearly twenty years ago I was running the development organization of a CAE startup. One of the programmers had a fascination with fractals, which were receiving a lot of press at the time. In particular, using certain mathematical formulas one could generate some very intriguing pictures. Unfortunately, it took, and still takes, considerable computing time to generate these images. Back then the IBM PC AT was our development environment, a very anemic machine by today's standards in terms of CPU horsepower, available memory, and disk storage capacity and speed. We were using SCO UNIX as our operating system. This ingenious programmer found a way to distribute the execution of his fractal program so that it tapped the unused CPU cycles across our network of computers to generate his pictures.
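
As a rough illustration of the idea (the original program is long gone, so this is only a sketch): split the image into rows and hand each row to whichever worker is free. In the Python below, a local process pool stands in for the networked PCs of that era; the fractal math itself is the standard Mandelbrot escape-time calculation.

    # Sketch: carve a fractal image into rows and farm them out to a pool of
    # workers, much as idle machines on a network might be used.
    from multiprocessing import Pool

    WIDTH, HEIGHT, MAX_ITER = 800, 600, 256

    def mandelbrot_row(y):
        """Compute one row of Mandelbrot escape-time counts."""
        row = []
        for x in range(WIDTH):
            c = complex(-2.0 + 3.0 * x / WIDTH, -1.25 + 2.5 * y / HEIGHT)
            z, n = 0j, 0
            while abs(z) <= 2.0 and n < MAX_ITER:
                z = z * z + c
                n += 1
            row.append(n)
        return y, row

    if __name__ == "__main__":
        with Pool() as workers:          # stand-in for networked idle CPUs
            image = dict(workers.map(mandelbrot_row, range(HEIGHT)))
        print("computed", len(image), "rows")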


In general, to improve the speed of a single computer one looks to a more powerful CPU, faster memory and disk devices, configurations with greater memory capacity, memory caching and so forth. One can also look to special-purpose co-processors to offload the CPU. Twenty years ago, the PC AT had an optional floating point co-processor. At that time supercomputers were typically array processors. They operated (add, subtract, multiply, invert, …) on very large matrices in the same manner that conventional computers operated on integers. In order to use these array processors, a computer program had to be compiled and linked with an appropriate library of routines. This meant that the software vendor and the array processor vendor had to cooperate to provide versions to end users that would leverage the power of the array processor.


The startup I co-founded developed a box full of electronics that included both a computational engine and a graphics processor. We were able to provide interactive display and interrogation of complex images generated by modeling and analysis programs such as shaded and hidden line images of solid objects, contour stress plots, dynamic mode shapes and assembly sequences. While such capabilities were available on expensive engineering workstations, we were the first to provide them in the PC environment with our supercharged computer.


Another approach to improving performance is to employ multiple processors in a single machine. These processors can be used to run multiple independent programs at the same time or to execute multiple subtasks within a single program.
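
The two uses are easy to tell apart in a short sketch. The Python below is illustrative only (the job being run is a made-up sum): it launches two independent programs side by side, then splits one computation into subtasks that run in parallel within a single program.

    # Sketch of the two uses of a multiprocessor machine: (1) independent
    # programs running side by side, (2) subtasks within a single program.
    import subprocess
    import sys
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(bounds):
        """One subtask: sum a half-open range of integers."""
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        # (1) Independent program executions: two unrelated jobs at once.
        jobs = [subprocess.Popen([sys.executable, "-c",
                                  "print(sum(range(10**6)))"])
                for _ in range(2)]
        for job in jobs:
            job.wait()

        # (2) Subtasks within one program: split a single sum across cores.
        chunks = [(0, 500_000), (500_000, 1_000_000)]
        with ProcessPoolExecutor() as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total)                     # 499999500000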


The last approach, and the topic of this week's editorial, is “grid computing”. Here the multiple processors do not reside in a single machine but are distributed across numerous machines linked by a network.




Grid Computing


According to the Grid Computing Info Centre, “Computational Grids enable the sharing, selection, and aggregation of a wide variety of geographically distributed computational resources (such as supercomputers, compute clusters, storage systems, data sources, instruments, people) and presents them as a single, unified resource for solving large-scale compute and data intensive computing applications. This idea is analogous to electrical power network (grid) where power generators are distributed, but the users are able to access electric power without bothering about the source of energy and its location.”


Grid computing allows one to unite pools of servers, storage systems, and networks into a single large system, so that the power of multiple systems' resources can be delivered to a single user point for a specific purpose. To a user or an application, the system appears to be a single, enormous virtual computing system. Virtualization enables companies to balance the supply and demand of computing cycles and resources by providing users with a single, transparent, aggregated source of computing power.


Applications that can benefit from grid computing include computation-intensive applications such as simulation and analysis, data-intensive applications such as experimental data and image/sensor analysis, and distributed collaboration such as online instrumentation, remote visualization and engineering. High-throughput computing and on-demand computing to meet peak resource requirements will also benefit.


Grid computing must deal effectively and efficiently with issues of security, workload management, scheduling, data management and resource management.
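
Scheduling and workload management, at their simplest, come down to matching queued work against idle resources. The Python below is a minimal, hypothetical sketch of that matching step; the Node and Scheduler classes are invented for illustration and do not correspond to any real grid toolkit.

    # Minimal, hypothetical sketch of workload management: match queued work
    # units to whichever registered nodes are currently idle.
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        busy: bool = False

    @dataclass
    class Scheduler:
        nodes: list
        queue: deque = field(default_factory=deque)

        def submit(self, work_unit):
            self.queue.append(work_unit)

        def dispatch(self):
            """Hand queued work units to idle nodes until one side runs out."""
            assignments = []
            for node in self.nodes:
                if not self.queue:
                    break
                if not node.busy:
                    node.busy = True
                    assignments.append((node.name, self.queue.popleft()))
            return assignments

    if __name__ == "__main__":
        sched = Scheduler(nodes=[Node("pc-01"), Node("pc-02"), Node("pc-03")])
        for unit in ["wu-1", "wu-2", "wu-3", "wu-4"]:
            sched.submit(unit)
        print(sched.dispatch())   # pairs each idle node with the next work unit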




Examples of Grid Computing


A well-publicized use of distributed computing is connected with SETI, the Search for Extraterrestrial Intelligence, a scientific effort seeking to determine whether there is intelligent life outside Earth. SETI researchers use many methods. One popular method, radio SETI, listens for artificial radio signals coming from other stars. UC Berkeley has the task of analyzing vast quantities of radio data from the Arecibo Observatory in Puerto Rico. SETI@home, launched in May 1999, is a project that lets anyone with a computer and an Internet connection participate in this effort. Participants download a special screensaver. Every SETI@home participant receives a "work unit" from the project's lab (about 300 kilobytes of data), which is then processed by the PC whenever that user's machine is idle. Once the SETI@home screensaver completes its analysis, the client relays the processed information back to the lab at UC Berkeley. When the analyzed data is successfully uploaded, the Space Sciences Lab sends another work unit back to the participant's PC so that the process can be repeated.
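
The client's cycle is simple to sketch. The Python below is purely illustrative: the server URL, the idle test, and the analyze() routine are placeholders rather than the real SETI@home client or protocol, but it captures the fetch, process, upload, repeat loop described above.

    # Illustrative volunteer-client loop: fetch a work unit, crunch it while
    # the machine is idle, upload the result, repeat. The URL, idle test and
    # analyze() are placeholders, not the actual SETI@home client or protocol.
    import time
    import urllib.request

    SERVER = "http://example.org/seti"        # hypothetical project server

    def machine_is_idle():
        return True                           # real clients watch user activity

    def analyze(work_unit: bytes) -> bytes:
        return b"candidate-signals: 0"        # real clients run FFTs on the data

    def run_client():
        while True:
            if not machine_is_idle():
                time.sleep(60)
                continue
            with urllib.request.urlopen(SERVER + "/work_unit") as resp:
                work_unit = resp.read()       # roughly 300 kilobytes of data
            result = analyze(work_unit)
            req = urllib.request.Request(SERVER + "/result", data=result)
            urllib.request.urlopen(req)       # POST the result, then loop again

    if __name__ == "__main__":
        run_client()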


A second example is FightAIDS@Home, the first biomedical distributed computing project, launched by Entropia. It is now run by the Olson Laboratory at the Scripps Research Institute and uses idle computer resources to support fundamental research into discovering new drugs, drawing on our growing knowledge of the structural biology of AIDS.


Another example is Genome@home, which uses a computer algorithm based on the physical and biochemical rules by which genes and proteins behave to design new proteins (and hence new genes) that have not been found in nature. By comparing these "virtual genomes" to those found in nature, researchers can gain a much better understanding of how natural genomes have evolved and how natural genes and proteins work.


Parabon Computation, Inc. is a commercial company that is building a similar distributed computing platform, called Frontier, using idle time on individual computers. A downloadable compute engine runs like a screen saver on a client machine, processing tasks only when the machine is idle. Results are uploaded to the server and new tasks are downloaded. On July 13, 2000, Parabon launched its Compute Against Cancer program, in which it provides several research organizations searching for new and better cancer treatments with compute resources for analyzing the massive quantities of data they collect. Parabon is now recruiting commercial clients. Parabon will pay for CPU usage, or payments can be donated to one of its nonprofit partners.




Government Programs related to Grid Computing


Started in 1997, the Partnership for Advanced Computational Infrastructure (PACI) is a program of the NSF's Directorate for Computer and Information Science and Engineering (CISE). PACI is creating the foundation for meeting the expanding need for high-end computation and information technologies required by U.S. academic researchers. PACI partners contribute to the development of the information infrastructure by developing, applying and testing the necessary software, tools, and algorithms that contribute to the further growth of this "national grid" of interconnected high-performance computing systems.


PACI offers more than 22 high-performance computing systems that represent an unprecedented amount of computational resources made available by the NSF. The following are PACI's national partnerships and leading-edge sites: National Computational Science Alliance (Alliance), National Partnership for Advanced Computational Infrastructure (NPACI) and the Pittsburgh Supercomputing Center.


TeraGrid is a multi-year effort to build and deploy the world's largest, most comprehensive, distributed infrastructure for open scientific research. The TeraGrid project was launched by the NSF in August 2001 with $53 million in funding to four sites. By 2004, the TeraGrid will include 20 teraflops of computing power distributed at nine sites, facilities capable of managing and storing nearly 1 petabyte of data, high-resolution visualization environments, and toolkits for grid computing. Four new TeraGrid sites, announced in September 2003, will add more scientific instruments, large datasets, and additional computing power and storage capacity to the system. All the components will be tightly integrated and connected through a network that operates at 40 gigabits per second.





-- Jack Horgan, EDACafe.com Contributing Editor.



