Graphs

(From Wikipedia, the free encyclopedia)

[Figure: A drawing of a graph]

In mathematics and computer science, graph theory is the study of graphs: mathematical structures used to model pairwise relations between objects from a certain collection. A "graph" in this context refers to a collection of vertices or 'nodes' and a collection of edges that connect pairs of vertices. A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another; see graph (mathematics) for more detailed definitions and for other variations in the types of graphs that are commonly considered. The graphs studied in graph theory should not be confused with "graphs of functions" and other kinds of graphs.

History

The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory. This paper, as well as the one written by Vandermonde on the knight problem carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy and L'Huillier, and is at the origin of topology.

More than one century after Euler's paper on the bridges of Königsberg and while Listing introduced topology, Cayley was led by the study of particular analytical forms arising from differential calculus to study a particular class of graphs, the trees. This study had many implications in theoretical chemistry. The involved techniques mainly concerned the enumeration of graphs having particular properties. Enumerative graph theory then rose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937 and the generalization of these by De Bruijn in 1959. Cayley linked his results on trees with the contemporary studies of chemical composition. The fusion of the ideas coming from mathematics with those coming from chemistry is at the origin of a part of the standard terminology of graph theory. In particular, the term graph was introduced by Sylvester in a paper published in 1878 in Nature.

One of the most famous and productive problems of graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?". This problem remained unsolved for more than a century and the proof given by Kenneth Appel and Wolfgang Haken in 1976 (determination of 1936 types of configurations of which study is sufficient and checking of the properties of these configurations by computer) did not convince all the community. A simpler proof considering far fewer configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.

This problem was first posed by Francis Guthrie in 1852 and the first written record of this problem is a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger has in particular led to the study of the colorings of graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations, and especially the results obtained by Turán in 1941, are at the origin of another branch of graph theory, extremal graph theory.

The autonomous development of topology between 1860 and 1930 fertilized graph theory in return through the works of Jordan, Kuratowski and Whitney. Another important factor in the common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits.

The introduction of probabilistic methods in graph theory, especially in the study by Erdős and Rényi of the asymptotic probability of graph connectivity, is at the origin of yet another branch, known as random graph theory. Research in this branch has enabled mathematicians across the globe to advance the theory of graphs significantly.

Drawing graphs

Graphs are represented graphically by drawing a dot for every vertex, and drawing an arc between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow.

A graph drawing should not be confused with the graph itself (the abstract, non-graphical structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others.

Graph-theoretic data structures

There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures, but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory.

List structures

  • Incidence list - The edges are represented by an array containing pairs (ordered if directed) of vertices (that the edge connects) and possibly weight and other data.
  • Adjacency list - Much like the incidence list, each vertex has a list of which vertices it is adjacent to. This causes redundancy in an undirected graph: for example, if vertices A and B are adjacent, A's adjacency list contains B, while B's list contains A. Adjacency queries are faster, at the cost of extra storage space.
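
As a concrete illustration, here is a minimal C++ sketch of the adjacency-list representation described above; the names (Graph, addEdge) are illustrative assumptions, not part of any standard API.

#include <cstddef>
#include <vector>

// Minimal adjacency-list representation of an undirected graph (a sketch).
// Each vertex keeps a list of the vertices it is adjacent to, so an
// undirected edge {u, v} is stored twice: once in adj[u] and once in adj[v],
// which is exactly the redundancy noted above.
struct Graph {
    std::vector<std::vector<int>> adj;

    explicit Graph(std::size_t vertexCount) : adj(vertexCount) {}

    void addEdge(int u, int v) {
        adj[u].push_back(v);
        adj[v].push_back(u);  // store the edge in both lists
    }
};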

Matrix structures

  • Incidence matrix - The graph is represented by a matrix of E (edges) by V (vertices), where [edge, vertex] contains the edge's data (simplest case: 1 - connected, 0 - not connected).
  • Adjacency matrix - there is an N by N matrix, where N is the number of vertices in the graph. If there is an edge from some vertex x to some vertex y, then the element Mx,y is 1, otherwise it is 0. This makes it easier to find subgraphs, and to reverse graphs if needed.
  • Laplacian matrix or Kirchhoff matrix or Admittance matrix - is defined as the degree matrix minus the adjacency matrix, and thus contains both adjacency information and degree information about the vertices.
  • Distance matrix - A symmetric N by N matrix whose element Mx,y is the length of the shortest path between x and y; if there is no such path, Mx,y = infinity. It can be derived from powers of the adjacency matrix.
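
For comparison, a minimal C++ sketch of the adjacency-matrix representation; again, the type and method names are illustrative assumptions.

#include <vector>

// Minimal adjacency-matrix representation (a sketch). m[x][y] is 1 if there
// is an edge from vertex x to vertex y, otherwise 0. Storage is O(N^2)
// regardless of the number of edges, which is the memory cost noted above.
struct AdjacencyMatrix {
    std::vector<std::vector<int>> m;

    explicit AdjacencyMatrix(int n) : m(n, std::vector<int>(n, 0)) {}

    void addEdge(int x, int y) { m[x][y] = 1; }                // directed edge x -> y
    bool hasEdge(int x, int y) const { return m[x][y] == 1; }  // constant-time query
};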

Problems in graph theory

Enumeration

There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).

Subgraphs, induced subgraphs, and minors

A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs, or all induced subgraphs, have it too. Unfortunately, finding maximal subgraphs of a certain kind is often an NP-complete problem.

  • Finding the largest complete graph is called the clique problem (NP-complete).

A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example,

  • Finding the largest edgeless induced subgraph, or independent set, called the independent set problem (NP-complete).

Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. A famous example:

  • A graph is planar if it contains as a minor neither the complete bipartite graph K3,3 (See the Three cottage problem) nor the complete graph K5.

Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their point-deleted subgraphs, for example:

  • The reconstruction conjecture

Graph coloring

Many problems have to do with various ways of coloring graphs, for example:

  • The four-color theorem
  • The strong perfect graph theorem
  • The Erdős-Faber-Lovász conjecture (unsolved)
  • The total coloring conjecture (unsolved)
  • The list coloring conjecture (unsolved)

Route problems

  • Hamiltonian path and cycle problems
  • Minimum spanning tree
  • Route inspection problem (also called the "Chinese Postman Problem")
  • Seven Bridges of Königsberg
  • Shortest path problem
  • Steiner tree
  • Three cottage problem
  • Traveling salesman problem (NP-Complete)

Network flow

There are numerous problems, arising especially from applications, that have to do with various notions of flows in networks, for example the maximum flow problem.

Visibility graph problems

  • Museum guard problem

Covering problems

Covering problems are specific instances of subgraph-finding problems, and they tend to be closely related to the clique problem or the independent set problem.

  • Set cover problem
  • Vertex cover problem

Applications

Applications of graph theory are primarily, but not exclusively, concerned with labeled graphs and various specializations of these.

Structures that can be represented as graphs are ubiquitous, and many problems of practical interest can be represented by graphs. The link structure of a website could be represented by a directed graph: the vertices are the web pages available at the website and a directed edge from page A to page B exists if and only if A contains a link to B. A similar approach can be taken to problems in travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science.

A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. A digraph with weighted edges is, in the context of graph theory, called a network.

Networks have many uses on the applied side of graph theory, namely in network analysis (for example, to model and analyze traffic networks). Within network analysis, the definition of the term "network" varies, and may often refer to a simple graph.

Many applications of graph theory exist in the form of network analysis. These split broadly into two categories. Firstly, analysis to determine structural properties of a network, such as the distribution of vertex degrees and the diameter of the graph. A vast number of graph measures exist, and the production of useful ones for various domains remains an active area of research. Secondly, analysis to find a measurable quantity within the network, for example, for a transportation network, the level of vehicular flow within any portion of it.

Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms; an example is Franzblau's shortest-path (SP) rings.

Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore diffusion mechanisms, notably through the use of social network analysis software.

Borůvka’s algorithm

(From Wikipedia, the free encyclopedia)

Borůvka's algorithm is an algorithm for finding a minimum spanning tree in a graph for which all edge weights are distinct.

It was first published in 1926 by Otakar Borůvka as a method of constructing an efficient electricity network for Moravia. The algorithm was rediscovered by Choquet in 1938; again by Florek, Łukasiewicz, Perkal, Steinhaus, and Zubrzycki in 1951; and again by Sollin some time in the early 1960s. Because Sollin was the only Western computer scientist in this list, this algorithm is frequently called Sollin's algorithm, especially in the parallel computing literature.

The algorithm begins by examining each vertex and adding the cheapest edge from that vertex to another in the graph, without regard to already added edges, and continues joining these groupings in a like manner until a tree spanning all vertices is completed. Designating each vertex or set of connected vertices a "component", pseudocode for Borůvka's algorithm is:

  • Begin with a connected graph G containing edges of distinct weights, and an empty set of edges T
  • While the vertices of G connected by T are disjoint:
    • Begin with an empty set of edges E
    • For each component:
      • Begin with an empty set of edges S
      • For each vertex in the component:
        • Add the cheapest edge from the vertex in the component to another vertex in a disjoint component to S
      • Add the cheapest edge in S to E
    • Add the resulting set of edges E to T.
  • The resulting set of edges T is the minimum spanning tree of G

Borůvka's algorithm can be shown to run in time O(E log V), where E is the number of edges, and V is the number of vertices in G.
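
To make the pseudocode above concrete, here is a hedged C++ sketch of Borůvka's algorithm, assuming a connected graph with distinct edge weights; the Edge and DisjointSet types are illustrative, not from a library.

#include <vector>

struct Edge { int u, v; int w; };

// Simple disjoint-set (union-find) structure to track components.
struct DisjointSet {
    std::vector<int> parent;
    explicit DisjointSet(int n) : parent(n) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {               // returns false if already joined
        a = find(a); b = find(b);
        if (a == b) return false;
        parent[a] = b;
        return true;
    }
};

// Returns the indices of the edges forming the minimum spanning tree of a
// connected graph with n vertices and distinct edge weights.
std::vector<int> boruvkaMST(int n, const std::vector<Edge>& edges) {
    DisjointSet ds(n);
    std::vector<int> mst;
    int components = n;
    while (components > 1) {
        // cheapest[c] = index of the cheapest edge leaving component c
        std::vector<int> cheapest(n, -1);
        for (int i = 0; i < static_cast<int>(edges.size()); ++i) {
            int cu = ds.find(edges[i].u), cv = ds.find(edges[i].v);
            if (cu == cv) continue;          // edge is internal to a component
            if (cheapest[cu] == -1 || edges[i].w < edges[cheapest[cu]].w)
                cheapest[cu] = i;
            if (cheapest[cv] == -1 || edges[i].w < edges[cheapest[cv]].w)
                cheapest[cv] = i;
        }
        for (int c = 0; c < n; ++c) {
            int i = cheapest[c];
            // unite() fails when both endpoints were merged earlier this round
            if (i != -1 && ds.unite(edges[i].u, edges[i].v)) {
                mst.push_back(i);
                --components;
            }
        }
    }
    return mst;
}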

Other algorithms for this problem include Prim's algorithm (actually discovered by Vojtěch Jarník) and Kruskal's algorithm. Faster algorithms can be obtained by combining Prim's algorithm with Borůvka's. A faster randomized version of Borůvka's algorithm due to Karger, Klein, and Tarjan runs in expected O(E) time. The best known (deterministic) minimum spanning tree algorithm by Bernard Chazelle is based on Borůvka's and runs in O(E α(V)) time, where α is the inverse of the Ackermann function.

Kruskal’s algorithm

(From Wikipedia, the free encyclopedia)

Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. If the graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for each connected component). Kruskal's algorithm is an example of a greedy algorithm.

Kruskal's algorithm

It works as follows:

  • create a forest F (a set of trees), where each vertex in the graph is a separate tree
  • create a set S containing all the edges in the graph
  • while S is nonempty
    • remove an edge with minimum weight from S
    • if that edge connects two different trees, then add it to the forest, combining two trees into a single tree
    • otherwise discard that edge

At the termination of the algorithm, the forest has only one component and forms a minimum spanning tree of the graph.
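
A minimal C++ sketch of these steps, using a disjoint-set forest with union by rank (discussed under Performance below) to test whether an edge connects two different trees; the type and function names are illustrative assumptions.

#include <algorithm>
#include <vector>

struct Edge { int u, v; int w; };

// Disjoint-set forest with path compression and union by rank.
struct DisjointSet {
    std::vector<int> parent, rank_;
    explicit DisjointSet(int n) : parent(n), rank_(n, 0) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {            // false if a and b are already joined
        a = find(a); b = find(b);
        if (a == b) return false;
        if (rank_[a] < rank_[b]) std::swap(a, b);
        parent[b] = a;
        if (rank_[a] == rank_[b]) ++rank_[a];
        return true;
    }
};

std::vector<Edge> kruskalMST(int n, std::vector<Edge> edges) {
    // Sort edges by weight: the O(E log E) step that dominates the running time.
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.w < b.w; });
    DisjointSet ds(n);
    std::vector<Edge> tree;
    for (const Edge& e : edges)
        if (ds.unite(e.u, e.v))    // edge connects two different trees: keep it
            tree.push_back(e);     // otherwise it is discarded
    return tree;
}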

This algorithm first appeared in Proceedings of the American Mathematical Society, pp. 48–50 in 1956, and was written by Joseph Kruskal.

Performance

Where E is the number of edges in the graph and V is the number of vertices, Kruskal's algorithm can be shown to run in O(E log E) time, or equivalently, O(E log V) time, all with simple data structures. These running times are equivalent because:

  • E is at most V^2, and log V^2 = 2 log V is O(log V).
  • If we ignore isolated vertices, which will each be their own component of the minimum spanning tree anyway, V ≤ 2E, so log V is O(log E).

We can achieve this bound as follows: first sort the edges by weight using a comparison sort in O(E log E) time; this allows the step "remove an edge with minimum weight from S" to operate in constant time. Next, we use a disjoint-set data structure to keep track of which vertices are in which components. We need to perform O(E) operations, two 'find' operations and possibly one union for each edge. Even a simple disjoint-set data structure such as disjoint-set forests with union by rank can perform O(E) operations in O(E log V) time. Thus the total time is O(E log E) = O(E log V).

Provided that the edges are either already sorted or can be sorted in linear time (for example with counting sort or radix sort), the algorithm can use more sophisticated disjoint-set data structures to run in O(E α(V)) time, where α is the extremely slowly-growing inverse of the single-valued Ackermann function.

Example

  • This is our original graph. The numbers near the arcs indicate their weight. None of the arcs are highlighted.
  • AD and CE are the shortest arcs, with length 5, and AD has been arbitrarily chosen, so it is highlighted.
  • CE is now the shortest arc that does not form a loop, with length 5, so it is highlighted as the second arc.
  • The next arc, DF with length 6, is highlighted using much the same method.
  • The next-shortest arcs are AB and BE, both with length 7. AB is chosen arbitrarily, and is highlighted. The arc BD has been highlighted in red, because it would form the loop ABD if it were chosen.
  • The process continues to highlight the next-smallest arc, BE with length 7. Many more arcs are highlighted in red at this stage: BC, because it would form the loop BCE; DE, because it would form the loop DEBA; and FE, because it would form the loop FEBAD.
  • Finally, the process finishes with the arc EG of length 9, and the minimum spanning tree is found.

Proof of correctness

Let P be a connected, weighted graph and let Y be the subgraph of P produced by the algorithm. Y cannot have a cycle, since the last edge added to that cycle would have been within one subtree and not between two different trees. Y cannot be disconnected, since the first encountered edge that joins two components of Y would have been added by the algorithm. Thus, Y is a spanning tree of P.

It remains to show that the spanning tree Y is minimal:

Let Y1 be a minimum spanning tree. If Y = Y1, then Y is a minimum spanning tree. Otherwise, let e be the first edge considered by the algorithm that is in Y but not in Y1. Y1 + e has a cycle, because one cannot add an edge to a spanning tree and still have a tree. This cycle contains another edge f which, at the stage of the algorithm where e is added to Y, has not yet been considered; otherwise e would not connect different trees but two branches of the same tree. Then Y2 = Y1 + e - f is also a spanning tree. Its total weight is less than or equal to the total weight of Y1, because the algorithm visits e before f and therefore w(e) ≤ w(f). If the weights are equal, we consider the next edge e which is in Y but not in Y1. If there is no edge left, the weight of Y is equal to the weight of Y1, although they consist of different edge sets, and Y is also a minimum spanning tree. In the case where the weight of Y2 is less than the weight of Y1, we could conclude that Y1 is not a minimum spanning tree; hence the case w(e) < w(f) cannot occur. Therefore Y is a minimum spanning tree (equal to Y1, or with a different edge set but the same weight).

Pseudocode

function Kruskal(G)
    for each vertex v in G do
        Define an elementary cluster C(v) ← {v}.
    Initialize a priority queue Q to contain all edges in G, using the weights as keys.
    Define a tree T ← Ø        // T will ultimately contain the edges of the MST
    // n is the total number of vertices
    while T has fewer than n-1 edges do
        // (u,v) is the minimum-weight edge remaining in Q
        (u,v) ← Q.removeMin()
        // Prevent cycles in T: add (u,v) only if T does not already connect u and v.
        // Note that a cluster contains more than one vertex only if an edge joining
        // a pair of its vertices has been added to the tree.
        Let C(v) be the cluster containing v, and let C(u) be the cluster containing u.
        if C(v) ≠ C(u) then
            Add edge (v,u) to T.
            Merge C(v) and C(u) into one cluster, that is, union C(v) and C(u).
    return tree T

Jarník-Prim’s algorithm

(From Wikipedia, the free encyclopedia)

Prim's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. The algorithm was discovered in 1930 by mathematician Vojtěch Jarník, later independently by computer scientist Robert C. Prim in 1957, and rediscovered by Dijkstra in 1959. Therefore it is sometimes called the DJP algorithm or the Jarník algorithm.

Description

The algorithm continuously increases the size of a tree starting with a single vertex until it spans all the vertices.

  • Input: A connected weighted graph G(V,E)
  • Initialize: V' = {x}, where x is an arbitrary node from V, E'= {}
  • repeat until V'=V:
    • Choose edge (u,v) from E with minimal weight such that u is in V' and v is not in V' (if there are multiple edges with the same weight, choose arbitrarily)
    • Add v to V', add (u,v) to E'
  • Output: G(V',E') is the minimal spanning tree

Time complexity

Minimum edge weight data structure                         Time complexity (total)
adjacency matrix, searching                                O(V^2)
binary heap (as in pseudocode below) and adjacency list    O((V + E) log V) = O(E log V)
Fibonacci heap and adjacency list                          O(E + V log V)

A simple implementation using an adjacency matrix graph representation and searching an array of weights to find the minimum weight edge to add requires O(V^2) running time. Using a simple binary heap data structure and an adjacency list representation, Prim's algorithm can be shown to run in time O(E log V), where E is the number of edges and V is the number of vertices. Using a more sophisticated Fibonacci heap, this can be brought down to O(E + V log V), which is significantly faster when the graph is dense enough that E is Ω(V log V).
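
The following is a hedged C++ sketch of the binary-heap variant, using std::priority_queue with lazy deletion of stale entries in place of a decrease-key operation; this preserves the O(E log V) bound. The function and type names are illustrative.

#include <climits>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// adj[u] holds pairs (v, weight) for an undirected weighted graph.
// Returns parent[], describing the MST edges (parent[root] stays -1).
std::vector<int> primMST(const std::vector<std::vector<std::pair<int, int>>>& adj) {
    int n = static_cast<int>(adj.size());
    std::vector<int> minDist(n, INT_MAX);
    std::vector<int> parent(n, -1);
    std::vector<bool> inTree(n, false);

    using Entry = std::pair<int, int>;       // (distance to tree, vertex)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
    minDist[0] = 0;                          // start from an arbitrary vertex, here 0
    heap.push({0, 0});

    while (!heap.empty()) {
        int u = heap.top().second;
        heap.pop();
        if (inTree[u]) continue;             // stale entry: u was already added
        inTree[u] = true;
        for (auto [v, w] : adj[u]) {
            if (!inTree[v] && w < minDist[v]) {
                minDist[v] = w;              // v's cheapest connection to the tree
                parent[v] = u;
                heap.push({w, v});
            }
        }
    }
    return parent;
}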

Example

Each step below gives the description for the corresponding image, followed by the not-seen set, the fringe, and the solution set.

  • This is our original weighted graph. It is not a tree, because the definition of a tree requires that there be no circuits, and this diagram contains circuits; a more correct name for this diagram would be a graph or a network. The numbers near the arcs indicate their weight. None of the arcs are highlighted, and vertex D has been arbitrarily chosen as a starting point. (Not seen: C, G; fringe: A, B, E, F; solution set: D)
  • The second chosen vertex is the vertex nearest to D: A is 5 away, B is 9, E is 15, and F is 6. Of these, 5 is the smallest, so we highlight the vertex A and the arc DA. (Not seen: C, G; fringe: B, E, F; solution set: A, D)
  • The next vertex chosen is the vertex nearest to either D or A. B is 9 away from D and 7 away from A, E is 15, and F is 6. 6 is the smallest, so we highlight the vertex F and the arc DF. (Not seen: C; fringe: B, E, G; solution set: A, D, F)
  • The algorithm carries on as above. Vertex B, which is 7 away from A, is highlighted. Here, the arc DB is highlighted in red, because both vertex B and vertex D have been highlighted, so it cannot be used. (Not seen: none; fringe: C, E, G; solution set: A, D, F, B)
  • In this case, we can choose between C, E, and G. C is 8 away from B, E is 7 away from B, and G is 11 away from F. E is nearest, so we highlight the vertex E and the arc EB. Two other arcs have been highlighted in red, as both their joining vertices have been used. (Not seen: none; fringe: C, G; solution set: A, D, F, B, E)
  • Here, the only vertices available are C and G. C is 5 away from E, and G is 9 away from E. C is chosen, so it is highlighted along with the arc EC. The arc BC is also highlighted in red. (Not seen: none; fringe: G; solution set: A, D, F, B, E, C)
  • Vertex G is the only remaining vertex. It is 11 away from F, and 9 away from E. E is nearer, so we highlight vertex G and the arc EG. Now all the vertices have been highlighted; the minimum spanning tree is shown in green. In this case, it has weight 39. (Not seen: none; fringe: none; solution set: A, D, F, B, E, C, G)

Pseudo-code

Min-heap

Initialization

inputs: a graph, a function weight-function returning edge weights, and an initial vertex

Place all vertices in the "not yet seen" set, mark the initial vertex as the first to be added to the tree, and place all vertices in a min-heap so that the minimum-distance vertex can be removed efficiently.

for each vertex in graph
    set min_distance of vertex to ∞
    set parent of vertex to null
    set minimum_adjacency_list of vertex to empty list
    set is_in_Q of vertex to true
set min_distance of initial vertex to zero
add to minimum-heap Q all vertices in graph

Algorithm

In the algorithm description above:

  • nearest vertex is Q[0], the latest addition
  • fringe is v in Q where min_distance of v < ∞, after the nearest vertex is removed
  • not seen is v in Q where min_distance of v = ∞, after the nearest vertex is removed

The while loop terminates when remove minimum returns null. The adjacency lists are set up so that a directed version of the tree can be returned.

// time complexity: V for the loop, log(V) for the remove function
while latest_addition = remove minimum in Q
    set is_in_Q of latest_addition to false
    add latest_addition to (minimum_adjacency_list of (parent of latest_addition))
    add (parent of latest_addition) to (minimum_adjacency_list of latest_addition)

    // time complexity: E/V, the average number of neighbours
    for each adjacent of latest_addition
        if (is_in_Q of adjacent) and (weight-function(latest_addition, adjacent) < min_distance of adjacent)
            set parent of adjacent to latest_addition
            set min_distance of adjacent to weight-function(latest_addition, adjacent)

            // time complexity: log(V), the height of the heap
            update adjacent in Q, order by min_distance

Proof of correctness

Let P be a connected, weighted graph. At every iteration of Prim's algorithm, an edge must be found that connects a vertex in a subgraph to a vertex outside the subgraph. Since P is connected, there will always be a path to every vertex. The output Y of Prim's algorithm is a tree, because the edge and vertex added to Y are connected. Let Y1 be a minimum spanning tree of P. If Y1 = Y then Y is a minimum spanning tree. Otherwise, let e be the first edge added during the construction of Y that is not in Y1, and let V be the set of vertices connected by the edges added before e. Then one endpoint of e is in V and the other is not. Since Y1 is a spanning tree of P, there is a path in Y1 joining the two endpoints. As one travels along the path, one must encounter an edge f joining a vertex in V to one that is not in V. Now, at the iteration when e was added to Y, f could also have been added, and it would have been added instead of e if its weight had been less than that of e. Since f was not added, we conclude that

w(f) ≥ w(e).

Let Y2 be the graph obtained by removing f from Y1 and adding e. It is easy to show that Y2 is connected, has the same number of edges as Y1, and that the total weight of its edges is not larger than that of Y1; therefore it is also a minimum spanning tree of P, and it contains e and all the edges added before it during the construction of Y. Repeating the steps above, we will eventually obtain a minimum spanning tree of P that is identical to Y. This shows Y is a minimum spanning tree.

Properties of shortest paths

(From Wikipedia, the free encyclopedia)

In graph theory, the shortest path problem is the problem of finding a path between two vertices such that the sum of the weights of its constituent edges is minimized. An example is finding the quickest way to get from one location to another on a road map; in this case, the vertices represent locations and the edges represent segments of road and are weighted by the time needed to travel that segment.

Formally, given a weighted graph (that is, a set V of vertices, a set E of edges, and a real-valued weight function f : E → R), and one element v of V, find a path P from v to each v' of V so that the sum of the weights of its edges,

f(P) = Σ f(e) over all edges e in P,

is minimal among all paths connecting v to v'.

Sometimes it is called the single-pair shortest path problem, to distinguish it from the following generalizations:

  • The single-source shortest path problem is a more general problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph.
  • The all-pairs shortest path problem is an even more general problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph.

Both of these generalizations admit significantly more efficient algorithms in practice than simply running a single-pair shortest path algorithm on all relevant pairs of vertices.

The most important algorithms for solving this problem are:

  • Dijkstra's algorithm — solves single source problem if all edge weights are greater than or equal to zero. Without worsening the run time, this algorithm can in fact compute the shortest paths from a given start point s to all other nodes.
  • Bellman-Ford algorithm — solves single source problem if edge weights may be negative.
  • A* search algorithm solves for single source shortest paths using heuristics to try to speed up the search
  • Floyd-Warshall algorithm — solves all pairs shortest paths.
  • Johnson's algorithm — solves all pairs shortest paths, may be faster than Floyd-Warshall on sparse graphs.
  • Perturbation theory; finds (at worst) the locally shortest path

Shortest path algorithms are applied in an obvious way to automatically find directions between physical locations, such as driving directions on web mapping websites like Mapquest.

If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if the vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.

In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem and usually tied with a widest path problem. e.g.: Shortest (min-delay) widest path or Widest shortest (min-delay) path.

Dijkstra’s algorithm

(From Wikipedia, the free encyclopedia)

Dijkstra's algorithm, named after its discoverer, Dutch computer scientist Edsger Dijkstra, is a greedy algorithm that solves the single-source shortest path problem for a directed graph with non-negative edge weights.

For example, if the vertices (nodes) of the graph represent cities and edge weights represent driving distances between pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest route between two cities.

The input of the algorithm consists of a weighted directed graph G and a source vertex s in G. We will denote V the set of all vertices in the graph G. Each edge of the graph is an ordered pair of vertices (u,v) representing a connection from vertex u to vertex v. The set of all edges is denoted E. Weights of edges are given by a weight function w: E → [0, ∞); therefore w(u,v) is the cost of moving directly from vertex u to vertex v. The cost of an edge can be thought of as (a generalization of) the distance between those two vertices. The cost of a path between two vertices is the sum of costs of the edges in that path. For a given pair of vertices s and t in V, the algorithm finds the path from s to t with lowest cost (i.e. the shortest path). It can also be used for finding costs of shortest paths from a single vertex s to all other vertices in the graph.

In the following algorithm, u := extract_min(Q) searches for the vertex u in the vertex set Q that has the least dist[u] value. That vertex is removed from the set Q and returned to the user. length(u, v) calculates the length between the two neighbor-nodes u and v. alt on line 10 is the length of the path from the root node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, that current path is replaced with this alt path.

1   function Dijkstra(Graph, source):
2       for each vertex v in Graph:           // Initializations
3           dist[v] := infinity               // Unknown distance function from source to v
4           previous[v] := undefined
5       dist[source] := 0                     // Distance from source to source
6       Q := copy(Graph)                      // Set of all unvisited vertices
7       while Q is not empty:                 // The main loop
8           u := extract_min(Q)               // Remove best vertex from priority queue; returns source on first iteration
9           for each neighbor v of u:
10              alt := dist[u] + length(u, v)
11              if alt < dist[v]:             // Relax (u,v)
12                  dist[v] := alt
13                  previous[v] := u

If we are only interested in a shortest path between vertices source and target, we can terminate the search at line 9 if u = target. Now we can read the shortest path from source to target by iteration:

S := empty sequence
u := target
while previous[u] is defined:
    insert u at the beginning of S
    u := previous[u]

Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists.

A more general problem would be to find all the shortest paths between source and target (there might be several different ones of the same length). Then instead of storing only a single node in each entry of previous[] we would store all nodes satisfying the relaxation condition. For example, if both r and source connect to target and both of them lie on different shortest paths through target (because the edge cost is the same in both cases), then we would add both r and source to previous[target]. When the algorithm completes, the previous[] data structure will describe a graph that is a subset of the original graph with some edges removed. Its key property is that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph will be a shortest path between those nodes in the original graph, and all paths of that length from the original graph will be present in the new graph. Then to actually find all these shortest paths between two given nodes we would use a path-finding algorithm on the new graph, such as depth-first search.

The running time of Dijkstra's algorithm on a graph with edges E and vertices V can be expressed as a function of |E| and |V| using the Big-O notation.

The simplest implementation of Dijkstra's algorithm stores the vertices of the set Q in an ordinary linked list or array, and the operation Extract-Min(Q) is simply a linear search through all vertices in Q. In this case, the running time is O(|V|^2 + |E|).

For sparse graphs, that is, graphs with many fewer than |V|^2 edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a binary heap, pairing heap, or Fibonacci heap as a priority queue to implement the Extract-Min function efficiently. With a binary heap, the algorithm requires O((|E| + |V|) log |V|) time (which is dominated by O(|E| log |V|), assuming every vertex is connected, i.e., |E| ≥ |V| - 1), and the Fibonacci heap improves this to O(|E| + |V| log |V|).
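
As a sketch only, here is how the binary-heap variant might look in C++, with lazy deletion of stale queue entries in place of a true Extract-Min with decrease-key; the names are illustrative assumptions.

#include <climits>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// adj[u] holds pairs (v, weight) with non-negative weights.
// Returns dist[], the shortest-path costs from source to every vertex.
std::vector<long long> dijkstra(
        const std::vector<std::vector<std::pair<int, int>>>& adj, int source) {
    const long long INF = LLONG_MAX;
    std::vector<long long> dist(adj.size(), INF);
    using Entry = std::pair<long long, int>;  // (dist[v], v)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> q;
    dist[source] = 0;
    q.push({0, source});
    while (!q.empty()) {
        auto [d, u] = q.top();
        q.pop();
        if (d > dist[u]) continue;            // stale entry: u already settled
        for (auto [v, w] : adj[u]) {
            long long alt = d + w;            // "alt" as in the pseudocode above
            if (alt < dist[v]) {              // relax (u,v)
                dist[v] = alt;
                q.push({alt, v});
            }
        }
    }
    return dist;
}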

The functionality of Dijkstra's original algorithm can be extended with a variety of modifications. For example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated. A single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest-path calculated. The secondary solutions are then ranked and presented after the first optimal solution.

OSPF (open shortest path first) is a well known real-world implementation of Dijkstra's algorithm used in Internet routing.

Unlike Dijkstra's algorithm, the Bellman-Ford algorithm can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex s. (The presence of such cycles means there is no shortest path, since the total weight becomes lower each time the cycle is traversed.)

The A* algorithm is a generalization of Dijkstra's algorithm that cuts down on the size of the subgraph that must be explored, if additional information is available that provides a lower-bound on the "distance" to the target.

Breadth-first search

(From Wikipedia, the free encyclopedia)

Breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then for each of those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal.

BFS is an uninformed search method that aims to expand and examine all nodes of a graph systematically in search of a solution. In other words, it exhaustively searches the entire graph without considering the goal until it finds it. It does not use a heuristic.

From the standpoint of the algorithm, all child nodes obtained by expanding a node are added to a FIFO queue. In typical implementations, nodes that have not yet been examined for their neighbors are placed in some container (such as a queue or linked list) called "open" and then once examined are placed in the container "closed".

[Pic. 15: Animated example of a breadth-first search]
  1. Put the starting node (the root node) in the queue.
  2. Pull a node from the beginning of the queue and examine it.
    • If the searched element is found in this node, quit the search and return a result.
    • Otherwise push all the (so-far-unexamined) successors (the direct child nodes) of this node into the end of the queue, if there are any.
  3. If the queue is empty, every node in the graph has been examined; quit the search and return "not found".
  4. Repeat from Step 2.

C implementation

Algorithm of Breadth-first search:

void BFS(VLink G[], int v) {
    int w;
    VISIT(v);                   /* visit vertex v */
    visited[v] = 1;             /* mark v as visited */
    ADDQ(Q, v);                 /* enqueue v */
    while (!EMPTYQ(Q)) {
        v = DELQ(Q);            /* dequeue v */
        w = FIRSTADJ(G, v);     /* find first neighbor, return -1 if none */
        while (w != -1) {
            if (visited[w] == 0) {
                VISIT(w);       /* visit vertex w */
                ADDQ(Q, w);     /* enqueue the newly visited vertex w */
                visited[w] = 1; /* mark w as visited */
            }
            w = NEXTADJ(G, v);  /* find next neighbor, return -1 if none */
        }
    }
}

Main algorithm applying breadth-first search to a graph G=(V,E):

void TRAVEL_BFS(VLink G[], int visited[], int n) {
    int i;
    for (i = 0; i < n; i++) {
        visited[i] = 0;         /* mark every vertex as unvisited */
    }
    for (i = 0; i < n; i++)
        if (visited[i] == 0)
            BFS(G, i);
}

C++ implementation

This is the implementation of the above informal algorithm, where the "so-far-unexamined" is handled by the parent array. For actual C++ applications, see the Boost Graph Library.

Suppose we have a struct:

struct Vertex {
    ...
    std::vector<int> out;  // indexes of the vertices this vertex points to
    ...
};

and an array of vertices (the algorithm will use the indexes of this array to handle the vertices):

std::vector<Vertex> graph(vertices);

The algorithm starts from start and returns true if there is a directed path from start to end:

#include <map>
#include <queue>
#include <vector>

bool BFS(const std::vector<Vertex>& graph, int start, int end) {
    std::queue<int> next;
    std::map<int, int> parent;
    parent[start] = -1;
    next.push(start);
    while (!next.empty()) {
        int u = next.front();
        next.pop();
        // Here is the point where you can examine the u-th vertex of graph.
        // For example:
        if (u == end) return true;
        for (std::vector<int>::const_iterator j = graph[u].out.begin();
             j != graph[u].out.end(); ++j) {
            // Look through neighbors.
            int v = *j;
            if (parent.count(v) == 0) {
                // If v is unvisited.
                parent[v] = u;
                next.push(v);
            }
        }
    }
    return false;
}

It also stores the parent of each node, from which you can get the path.

  • Space Complexity

Since all nodes discovered so far have to be saved, the space complexity of breadth-first search is O(|V| + |E|), where |V| is the number of nodes and |E| the number of edges in the graph. Another way of saying this is that it is O(B^M), where B is the maximum branching factor and M is the maximum path length of the tree. This immense demand for space is the reason why breadth-first search is impractical for larger problems.

  • Time Complexity

Since in the worst case breadth-first search has to consider all paths to all possible nodes, the time complexity of breadth-first search is O(|V| + |E|), where |V| is the number of nodes and |E| the number of edges in the graph. The best case is O(1), which occurs when the goal is found at the first node examined.

  • Completeness

Breadth-first search is complete. This means that if there is a solution breadth-first search will find it regardless of the kind of graph. However, if the graph is infinite and there is no solution breadth-first search will diverge.

  • Optimality

For unit step costs, breadth-first search is optimal. In general, however, it is not, since it always returns the path with the fewest edges between the start node and the goal node: if the graph is weighted, and therefore has costs associated with each step, a goal next to the start need not be the cheapest goal available. This problem is addressed by improving breadth-first search to uniform-cost search, which considers the path costs. Nevertheless, if the graph is not weighted, and therefore all step costs are equal, breadth-first search will find the nearest and the best solution.

Breadth-first search can be used to solve many problems in graph theory, for example:

  • Finding all connected components in a graph.
  • Finding all nodes within one connected component
  • Copying garbage collection, Cheney's algorithm
  • Finding the shortest path between two nodes u and v (in an unweighted graph)
  • Testing a graph for bipartiteness
  • (Reverse) Cuthill–McKee mesh numbering

Finding connected components

The set of nodes reached by a BFS forms the connected component containing the start node.

Testing bipartiteness

BFS can be used to test bipartiteness, by starting the search at any vertex and giving alternating labels to the vertices visited during the search. That is, give label 0 to the starting vertex, 1 to all its neighbours, 0 to those neighbours' neighbours, and so on. If at any step a vertex has (visited) neighbours with the same label as itself, then the graph is not bipartite. If the search ends without such a situation occurring, then the graph is bipartite.
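
A minimal C++ sketch of this alternating-label test, extended to cover every connected component; the function name is an illustrative assumption.

#include <queue>
#include <vector>

// Returns true if the graph (given as adjacency lists) is bipartite.
bool isBipartite(const std::vector<std::vector<int>>& adj) {
    int n = static_cast<int>(adj.size());
    std::vector<int> label(n, -1);            // -1 means unvisited
    for (int start = 0; start < n; ++start) { // cover every component
        if (label[start] != -1) continue;
        label[start] = 0;                     // label 0 for the starting vertex
        std::queue<int> q;
        q.push(start);
        while (!q.empty()) {
            int u = q.front();
            q.pop();
            for (int v : adj[u]) {
                if (label[v] == -1) {
                    label[v] = 1 - label[u];  // alternate 0/1 along the search
                    q.push(v);
                } else if (label[v] == label[u]) {
                    return false;             // same label on both endpoints
                }
            }
        }
    }
    return true;
}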

Usage in 2D grids for computer games

BFS has been applied to pathfinding problems in computer games, such as Real-Time Strategy games, where the graph is represented by a tilemap, and each tile in the map represents a node. Each node is then connected to each of its neighbours (north, north-east, east, south-east, south, south-west, west, and north-west).

It is worth mentioning that when BFS is used in this manner, the neighbour list should be created so that north, east, south and west get priority over north-east, south-east, south-west and north-west. Otherwise BFS tends to search in a diagonal manner before an adjacent one, and the path found may not be the desired one; BFS should first search adjacent nodes, then diagonal nodes, as in the sketch below.
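
As an illustration, the neighbour offsets might be ordered like this (a sketch, assuming y grows downward in the tile map):

// Orthogonal neighbours (N, E, S, W) come first so BFS expands adjacent
// tiles before diagonal ones (NE, SE, SW, NW).
const int DX[8] = { 0, 1, 0, -1,  1, 1, -1, -1 };
const int DY[8] = {-1, 0, 1,  0, -1, 1,  1, -1 };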

Bellman-Ford algorithm

(From Wikipedia, the free encyclopedia)

The Bellman–Ford algorithm computes single-source shortest paths in a weighted digraph (where some of the edge weights may be negative). Dijkstra's algorithm accomplishes the same problem with a lower running time, but requires edge weights to be non-negative. Thus, Bellman–Ford is usually used only when there are negative edge weights.

If a graph contains a cycle of total negative weight then arbitrarily low weights are achievable and so there's no solution; Bellman-Ford detects this case.

Bellman-Ford is in its basic structure very similar to Dijkstra's algorithm, but instead of greedily selecting the minimum-weight node not yet processed to relax, it simply relaxes all the edges, and does this |V| − 1 times, where |V| is the number of vertices in the graph. The repetitions allow minimum distances to accurately propagate throughout the graph, since, in the absence of negative cycles, the shortest path can only visit each node at most once. Unlike the greedy approach, which depends on certain structural assumptions derived from positive weights, this straightforward approach extends to the general case.

Bellman–Ford runs in O(V·E) time, where V and E are the number of vertices and edges respectively.

procedure BellmanFord(list vertices, list edges, vertex source)
    // This implementation takes in a graph, represented as lists of vertices
    // and edges, and modifies the vertices so that their distance and
    // predecessor attributes store the shortest paths.

    // Step 1: initialize graph
    for each vertex v in vertices:
        if v is source then v.distance := 0
        else v.distance := infinity
        v.predecessor := null

    // Step 2: relax edges repeatedly
    for i from 1 to size(vertices)-1:
        for each edge uv in edges:
            u := uv.source
            v := uv.destination          // uv is the edge from u to v
            if v.distance > u.distance + uv.weight:
                v.distance := u.distance + uv.weight
                v.predecessor := u

    // Step 3: check for negative-weight cycles
    for each edge uv in edges:
        u := uv.source
        v := uv.destination
        if v.distance > u.distance + uv.weight:
            error "Graph contains a negative-weight cycle"

The correctness of the algorithm can be shown by induction. The precise statement shown by induction is:

Lemma. After i repetitions of the for loop:

  • If Distance(u) is not infinity, it is equal to the length of some path from s to u;
  • If there is a path from s to u with at most i edges, then Distance(u) is at most the length of the shortest path from s to u with at most i edges.

Proof. For the base case of the induction, consider i=0 and the moment before the for loop is executed for the first time. Then, for the source vertex, source.distance = 0, which is correct. For other vertices u, u.distance = infinity, which is also correct because there is no path from source to u with 0 edges.

For the inductive case, we first prove the first part. Consider a moment when a vertex's distance is updated by v.distance := u.distance + uv.weight. By inductive assumption, u.distance is the length of some path from source to u. Then u.distance + uv.weight is the length of the path from source to v that follows the path from source to u and then goes to v.

For the second part, consider the shortest path from source to u with at most i edges. Let v be the last vertex before u on this path. Then the part of the path from source to v is the shortest path from source to v with at most i-1 edges. By the inductive assumption, v.distance after i-1 iterations is at most the length of this path. Therefore, v.distance + vu.weight is at most the length of the path from source to u. In the i-th iteration, u.distance gets compared with v.distance + vu.weight, and is set equal to it if the latter is smaller. Therefore, after i iterations, u.distance is at most the length of the shortest path from source to u that uses at most i edges.

When i equals the number of vertices in the graph, each path will be the shortest path overall, unless there are negative-weight cycles. If a negative-weight cycle exists and is reachable from the source, then given any walk, a shorter one exists, so there is no shortest walk. Otherwise, the shortest walk will not include any cycles (removing a cycle from the walk cannot increase its length), so each shortest path visits each vertex at most once, and its number of edges is less than the number of vertices in the graph.

A distributed variant of Bellman–Ford algorithm is used in distance-vector routing protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed because it involves a number of nodes (routers) within an Autonomous system, a collection of IP networks typically owned by an ISP. It consists of the following steps:

  1. Each node calculates the distances between itself and all other nodes within the AS and stores this information as a table.
  2. Each node sends its table to all neighboring nodes.
  3. When a node receives distance tables from its neighbours, it calculates the shortest routes to all other nodes and updates its own table to reflect any changes.