POJ1502 MPI Maelstrom: Shortest Path (Naive Dijkstra)
Source: Internet | Editor: 程序博客网 | Date: 2024/05/22
MPI Maelstrom
Time Limit: 1000MS  Memory Limit: 10000K  Total Submissions: 9173  Accepted: 5613
Description
BIT has recently taken delivery of their new supercomputer, a 32 processor Apollo Odyssey distributed shared memory machine with a hierarchical communication subsystem. Valentine McKee's research advisor, Jack Swigert, has asked her to benchmark the new system.
``Since the Apollo is a distributed shared memory machine, memory access and communication times are not uniform,'' Valentine told Swigert. ``Communication is fast between processors that share the same memory subsystem, but it is slower between processors that are not on the same subsystem. Communication between the Apollo and machines in our lab is slower yet.''
``How is Apollo's port of the Message Passing Interface (MPI) working out?'' Swigert asked.
``Not so well,'' Valentine replied. ``To do a broadcast of a message from one processor to all the other n-1 processors, they just do a sequence of n-1 sends. That really serializes things and kills the performance.''
``Is there anything you can do to fix that?''
``Yes,'' smiled Valentine. ``There is. Once the first processor has sent the message to another, those two can then send messages to two other hosts at the same time. Then there will be four hosts that can send, and so on.''
``Ah, so you can do the broadcast as a binary tree!''
``Not really a binary tree -- there are some particular features of our network that we should exploit. The interface cards we have allow each processor to simultaneously send messages to any number of the other processors connected to it. However, the messages don't necessarily arrive at the destinations at the same time -- there is a communication cost involved. In general, we need to take into account the communication costs for each link in our network topologies and plan accordingly to minimize the total time required to do a broadcast.''
Input
The input will describe the topology of a network connecting n processors. The first line of the input will be n, the number of processors, such that 1 <= n <= 100.
The rest of the input defines an adjacency matrix, A. The adjacency matrix is square and of size n x n. Each of its entries will be either an integer or the character x. The value of A(i,j) indicates the expense of sending a message directly from node i to node j. A value of x for A(i,j) indicates that a message cannot be sent directly from node i to node j.
Note that for a node to send a message to itself does not require network communication, so A(i,i) = 0 for 1 <= i <= n. Also, you may assume that the network is undirected (messages can go in either direction with equal overhead), so that A(i,j) = A(j,i). Thus only the entries on the (strictly) lower triangular portion of A will be supplied.
The input to your program will be the lower triangular section of A. That is, the second line of input will contain one entry, A(2,1). The next line will contain two entries, A(3,1) and A(3,2), and so on.
Output
Your program should output the minimum communication time required to broadcast a message from the first processor to all the other processors.
Sample Input
5
50
30 5
100 20 50
10 x x 10
Sample Output
35
Problem summary:
Starting from node 1, a message is broadcast to every other node. A node may send to any number of neighbors simultaneously, and any node that has received the message may immediately forward it to any number of nodes that have not yet received it. Find the minimum time for the message to reach the entire network.
Approach:
Run Dijkstra's algorithm to find the shortest path from node 1 to every other node. Since every node forwards the message the instant it arrives, the broadcast finishes when the last node hears it, so the answer is the largest of those shortest-path distances.
#include <cstdio>
#include <cstdlib>
using namespace std;

const int INF = 0x3f3f3f3f;   // large sentinel that is still safe to add to itself
int Map[105][105];            // adjacency matrix; 0 means no direct edge
int D[105];                   // current best distance from node 1
bool Done[105];               // true once a node's distance is final
int N;

int main() {
    scanf("%d", &N);
    char str[16];
    for (int i = 2; i <= N; i++)
        for (int j = 1; j < i; j++) {
            scanf(" %s", str);                      // each entry arrives as a token: a number or 'x'
            if (*str != 'x')
                Map[j][i] = Map[i][j] = atoi(str);  // undirected edge, supplied only once
        }
    // Initialize distances with the direct edges out of node 1.
    D[1] = 0;
    Done[1] = true;
    for (int i = 2; i <= N; i++)
        D[i] = Map[1][i] ? Map[1][i] : INF;
    // Naive O(n^2) Dijkstra.
    for (int m = 2; m <= N; m++) {
        int Min = INF, k = -1;
        for (int i = 1; i <= N; i++)                // pick the closest unfinished node
            if (!Done[i] && D[i] < Min) { Min = D[i]; k = i; }
        if (k == -1) break;                         // every remaining node is unreachable
        Done[k] = true;                             // its distance can no longer improve
        for (int i = 1; i <= N; i++)                // relax the edges out of k
            if (Map[k][i] && D[i] > D[k] + Map[k][i])
                D[i] = D[k] + Map[k][i];
    }
    int Max = 0;
    for (int i = 2; i <= N; i++)                    // broadcast time = farthest node
        if (Max < D[i]) Max = D[i];
    printf("%d\n", Max);
    return 0;
}