Stack and Queue

(From Wikipedia, the free encyclopedia)

Stack

In computer science, a stack is an abstract data type and data structure based on the principle of Last In, First Out (LIFO). Stacks are used extensively at every level of a modern computer system. For example, a modern PC uses stacks at the architecture level, where they form part of the basic design of an operating system for interrupt handling and operating system function calls. Among other uses, stacks are used to run a Java Virtual Machine, and the Java language itself has a class called "Stack" which can be used by the programmer. The stack is ubiquitous.

A stack-based computer system is one that stores temporary information primarily in stacks, rather than hardware CPU registers (a register-based computer system).

Abstract data type

(From Wikipedia, the free encyclopedia)

As an abstract data type, the stack is a container of nodes and has two basic operations: push and pop. Push adds a given node to the top of the stack, leaving previous nodes below. Pop removes and returns the current top node of the stack. A frequently used metaphor is a stack of plates in a spring-loaded cafeteria dispenser. In such a stack, only the top plate is visible and accessible to the user; all other plates remain hidden. As new plates are added, each new plate becomes the top of the stack, hiding each plate below and pushing the stack of plates down. When the top plate is removed from the stack it can be used, the remaining plates pop back up, and the second plate becomes the top of the stack. This metaphor illustrates two important principles: the first is the Last In, First Out principle; the second is that the contents of the stack are hidden. Only the top plate is visible, so to see what is on the third plate, the first and second plates have to be removed.

Operations

In modern computer languages, the stack is usually implemented with more operations than just "push" and "pop". The length of a stack can often be queried as well. Another helper operation, top (also known as peek), can return the current top element of the stack without removing it from the stack.

This section gives pseudocode for adding or removing nodes from a stack, as well as the length and top functions. Throughout we will use null to refer to an end-of-list marker or sentinel value, which may be implemented in a number of ways using pointers.

record Node {
    data                    // The data being stored in the node
    next                    // A reference to the next node; null for the last node
}

record Stack {
    Node stackPointer       // points to the 'top' node; null for an empty stack
}

function push(Stack stack, Element element) { // push element onto the stack
    new(newNode)                          // Allocate memory to hold the new node
    newNode.data := element
    newNode.next := stack.stackPointer
    stack.stackPointer := newNode
}

function pop(Stack stack) { // remove and return the 'top' node
    // You could check whether stack.stackPointer is null here.
    // If so, you may wish to raise an error, citing stack underflow.
    node := stack.stackPointer
    stack.stackPointer := node.next
    element := node.data
    return element
}

function top(Stack stack) { // return the data of the 'top' node
    return stack.stackPointer.data
}

function length(Stack stack) { // return the number of nodes in the stack
    length := 0
    node := stack.stackPointer
    while node not null {
        length := length + 1
        node := node.next
    }
    return length
}

As you can see, these functions pass the stack and the data elements as parameters and return values; they do not expose the data nodes, which in this implementation contain pointers. A stack may also be implemented as a linear section of memory (i.e. an array), in which case the function headers would not change, only the internals of the functions.

Implementation

A typical storage requirement for a stack of n elements is O(n). The typical time requirement of O(1) operations is also easy to satisfy with a dynamic array or (singly) linked list implementation.

C++'s Standard Template Library provides a "stack" templated class which is restricted to only push/pop operations. Java's library contains a Stack class that is a specialization of Vector. This could be considered a design flaw because the inherited get() method from Vector ignores the LIFO constraint of the Stack.

Here is a simple example in Python of a stack with the operations described above. It does not have any error checking.

class Stack:
    def __init__(self):
        self.stack_pointer = None

    def push(self, element):
        self.stack_pointer = Node(element, self.stack_pointer)

    def pop(self):
        (e, self.stack_pointer) = (self.stack_pointer.element, self.stack_pointer.next)
        return e

    def peek(self):
        return self.stack_pointer.element

    def __len__(self):
        i = 0
        sp = self.stack_pointer
        while sp:
            i += 1
            sp = sp.next
        return i

class Node:
    def __init__(self, element=None, next=None):
        self.element = element
        self.next = next

if __name__ == '__main__':
    # small use example
    s = Stack()
    for i in range(10):
        s.push(i)
    print([s.pop() for i in range(len(s))])

The above is admittedly redundant, as Python's built-in lists already support 'append' and 'pop' methods.
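
Because a Python list is itself a contiguous, array-backed sequence, it can serve directly as the array implementation of a stack mentioned earlier; a minimal sketch (the variable names are illustrative):

# A Python list used directly as a stack: append() pushes onto the end,
# pop() removes from the end, so the end of the list is the 'top'.
stack = []
for i in range(10):
    stack.append(i)        # push

print(stack[-1])           # peek at the top element -> 9

while stack:
    print(stack.pop())     # pops 9, 8, ..., 0 (LIFO order)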

Application

(From Wikipedia, the free encyclopedia)

Stacks are ubiquitous in the computing world.

Expression evaluation and syntax parsing

Calculators employing reverse Polish notation use a stack structure to hold values. Expressions can be represented in prefix, postfix or infix notations, and conversion from one form of the expression to another requires a stack. Many compilers use a stack for parsing the syntax of expressions, program blocks etc. before translating into low-level code. Most programming languages are context-free languages, allowing them to be parsed with stack-based machines.

For example, the calculation ((1 + 2) * 4) + 3 can be written down like this in postfix notation, with the advantage that no precedence rules or parentheses are needed:

1 2 + 4 * 3 +

The expression is evaluated from left to right using a stack:

  • when an operand is encountered, push it onto the stack;
  • when an operator is encountered, pop two operands and evaluate the result;
  • push the result back onto the stack.

The evaluation proceeds as follows (the stack is displayed after each operation has taken place):

Input   Operation       Stack
1       Push operand    1
2       Push operand    1, 2
+       Add             3
4       Push operand    3, 4
*       Multiply        12
3       Push operand    12, 3
+       Add             15

The final result, 15, lies on top of the stack at the end of the calculation.

Example: an implementation in the Pascal language, using a marked sequential file as the data archive.

{
  programmer : clx321
  file : stack.pas
  unit : Pstack.tpu
}

program TestStack;
{this program uses the Stack ADT; it is assumed that the unit implementing the Stack ADT already exists}

uses
   PStack;      {ADT of STACK}

{dictionary}
const
   mark = '.';

var
   data : stack;
   f : text;
   cc : char;
   ccInt, cc1, cc2 : integer;

{functions}
function IsOperand (cc : char) : boolean;          {JUST Prototype}
   {return TRUE if cc is an operand}

function ChrToInt (cc : char) : integer;           {JUST Prototype}
   {convert a char to an integer}

function Operator (cc1, cc2 : integer) : integer;  {JUST Prototype}
   {operate on two operands}

{algorithms}
begin
   assign (f, cc);
   reset (f);
   read (f, cc);      {first element}
   if (cc = mark) then
      begin
         writeln ('empty archive!');
      end
   else
      begin
         repeat
            if (IsOperand (cc)) then
               begin
                  ccInt := ChrToInt (cc);
                  push (ccInt, data);
               end
            else
               begin
                  pop (cc1, data);
                  pop (cc2, data);
                  push (Operator (cc2, cc1), data);
               end;
            read (f, cc);   {next element}
         until (cc = mark);
      end;
   close (f);
end.
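
For comparison with the Pascal program, here is a compact sketch of the same left-to-right postfix evaluation in Python; the function name and token handling are illustrative, not part of the original example:

def evaluate_postfix(tokens):
    # Evaluate a postfix (reverse Polish) expression given as a list of tokens.
    stack = []
    for token in tokens:
        if token in ('+', '-', '*', '/'):
            right = stack.pop()            # second operand
            left = stack.pop()             # first operand
            if token == '+':
                stack.append(left + right)
            elif token == '-':
                stack.append(left - right)
            elif token == '*':
                stack.append(left * right)
            else:
                stack.append(left / right)
        else:
            stack.append(int(token))       # operand: push it onto the stack
    return stack.pop()                     # the final result is left on top

print(evaluate_postfix('1 2 + 4 * 3 +'.split()))   # prints 15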

Runtime memory management

A number of programming languages are stack-oriented, meaning they define most basic operations (adding two numbers, printing a character) as taking their arguments from the stack, and placing any return values back on the stack. For example, PostScript has a return stack and an operand stack, and also has a graphics state stack and a dictionary stack.

Forth uses two stacks, one for argument passing and one for subroutine return addresses. The use of a return stack is extremely commonplace, but the somewhat unusual use of an argument stack for a human-readable programming language is the reason Forth is referred to as a stack-based language.

Many virtual machines are also stack-oriented, including the p-code machine and the Java virtual machine.

Almost all computer runtime memory environments use a special stack (the "call stack") to hold information about procedure/function calls and nesting, in order to switch to the context of the called function and to restore the caller's context when the call finishes. Caller and callee follow a runtime protocol for saving arguments and the return value on the stack. Stacks are an important way of supporting nested or recursive function calls. This type of stack is used implicitly by the compiler to support CALL and RETURN statements (or their equivalents) and is not manipulated directly by the programmer.

Some programming languages use the stack to store data that is local to a procedure. Space for local data items is allocated from the stack when the procedure is entered, and is deallocated when the procedure exits. The C programming language is typically implemented in this way. Using the same stack for both data and procedure calls has important security implications (see below) of which a programmer must be aware in order to avoid introducing serious security bugs into a program.
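
As a small illustration of the call stack at work, Python's standard inspect module can list the frames of the currently active calls; the function names below are invented for the example:

import inspect

def outer():
    return inner()

def inner():
    # Each active call has its own frame on the interpreter's call stack;
    # inspect.stack() lists those frames from the innermost outwards.
    return [frame.function for frame in inspect.stack()]

print(outer())   # e.g. ['inner', 'outer', '<module>']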

Solving search problems

Solving a search problem, regardless of whether the approach is exhaustive or optimal, needs stack space. Examples of exhaustive search methods are brute force and backtracking. Examples of optimal search methods are branch and bound and heuristic solutions. All of these algorithms use stacks to remember the search nodes that have been noticed but not yet explored. The only alternative to using an explicit stack is to use recursion and let the compiler do the remembering for you (but in this case the compiler is still using a stack internally). The use of stacks is prevalent in many problems, ranging from simple in-order traversals of trees or depth-first traversals of graphs to a crossword puzzle solver or a computer chess game. Some of these problems can be solved by alternative data structures, such as a queue, when a different order of traversal is required.
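
As a sketch of this idea, a depth-first traversal can keep the nodes that have been noticed but not yet explored on an explicit stack; the small example graph is invented for illustration:

def depth_first(graph, start):
    # graph: dict mapping each node to a list of neighbouring nodes
    visited = []
    stack = [start]                      # nodes noticed but not yet explored
    while stack:
        node = stack.pop()               # explore the most recently noticed node
        if node not in visited:
            visited.append(node)
            stack.extend(graph[node])    # remember its neighbours for later
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(depth_first(graph, 'A'))           # ['A', 'C', 'D', 'B']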

Security

Some computing environments use stacks in ways that may make them vulnerable to security breaches and attacks. Programmers working in such environments must take special care to avoid the pitfalls of these implementations.

For example, some programming languages use a common stack to store both data local to a called procedure and the linking information that allows the procedure to return to its caller. This means that the program moves data into and out of the same stack that contains critical return addresses for the procedure calls. If data is moved to the wrong location on the stack, or an oversized data item is moved to a stack location that is not large enough to contain it, return information for procedure calls may be corrupted, causing the program to fail.

Malicious parties may attempt to take advantage of this type of implementation by providing oversized data input to a program that does not check the length of input. Such a program may copy the data in its entirety to a location on the stack, and in so doing it may change the return addresses for procedures that have called it. An attacker can experiment to find a specific type of data that can be provided to such a program such that the return address of the current procedure is reset to point to an area within the stack itself (and within the data provided by the attacker), which in turn contains instructions that carry out unauthorized operations.

This type of attack is a variation on the buffer overflow attack and is an extremely frequent source of security breaches in software, mainly because some of the most popular programming languages (such as C) use a shared stack for both data and procedure calls, and do not verify the length of data items. Frequently programmers do not write code to verify the size of data items, either, and when an oversized or undersized data item is copied to the stack, a security breach may occur.

(From Wikipedia, the free encyclopedia)

A queue is a particular kind of collection in which the entities in the collection are kept in order and the principal (or only) operations on the collection are the addition of entities to the rear terminal position and removal of entities from the front terminal position. Queues provide services in computer science, transport and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer.

Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes.

The most well known operation of the queue is the First-In-First-Out (FIFO) queue process. In a FIFO queue, the first element added to the queue will be the first one out. This is equivalent to the requirement that whenever an element is added, all elements that were added before it have to be removed before the new element can be removed. Unless otherwise specified, the remainder of the article refers to FIFO queues. There are also non-FIFO queue data structures, such as priority queues.
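
A minimal FIFO sketch using Python's standard collections.deque, which exposes both terminal positions directly:

from collections import deque

queue = deque()
queue.append('first')      # enqueue at the rear
queue.append('second')
queue.append('third')

print(queue.popleft())     # dequeue from the front -> 'first'
print(queue.popleft())     # -> 'second'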

Performance

A straightforward analysis shows that, for both of the common implementations (an array and a linked list), the time needed to add or delete an item is constant and independent of the number of items in the queue. Thus we class both addition and deletion as O(1) operations. For any given real machine + operating system + language combination, addition may take c1 seconds and deletion c2 seconds, but we are not interested in the value of the constant; it will vary from machine to machine, from language to language, and so on. The key point is that the time does not depend on n, which is what makes these O(1) algorithms.

Once we have written an O(1) method, there is generally little more that we can do from an algorithmic point of view. Occasionally, a better approach may produce a lower constant time. Often, enhancing the compiler, run-time system, machine, etc. will produce some significant improvement. However, O(1) methods are already very fast, and it is unlikely that effort expended in improving such a method will produce much real gain.

Basic operations

There are two basic operations associated with a queue: enqueue and dequeue. Enqueue means adding a new item to the rear of the queue while dequeue refers to removing the front item from the queue and returning it to the calling entity.

Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again.

A practical implementation of a queue (e.g. one built with pointers) does, of course, have some capacity limit that depends on the concrete situation in which it is used. For any in-memory data structure, the executing computer will eventually run out of memory, thus limiting the queue size. Queue overflow results from trying to add an element to a full queue, and queue underflow happens when trying to remove an element from an empty queue.
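
A minimal linked-node queue in the same style as the Stack class above, with a simple underflow check; the class and attribute names are illustrative:

class Queue:
    def __init__(self):
        self.front = None                  # next node to be dequeued
        self.rear = None                   # most recently enqueued node

    def enqueue(self, element):
        node = Node(element)
        if self.rear is None:              # empty queue: node is both front and rear
            self.front = self.rear = node
        else:
            self.rear.next = node
            self.rear = node

    def dequeue(self):
        if self.front is None:
            raise IndexError('queue underflow')
        element = self.front.element
        self.front = self.front.next
        if self.front is None:             # queue became empty
            self.rear = None
        return element

class Node:
    def __init__(self, element=None, next=None):
        self.element = element
        self.next = next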

A bounded queue is a queue limited to a fixed number of items.

A FIFO queue is a queue in which the first item added is always the first one out.

A LIFO queue is a queue in which the item most recently added is always the first one out.

A priority queue is a queue in which the items are sorted so that the highest-priority item is always the next one to be extracted.

(From Wikipedia, the free encyclopedia)

A priority queue is an abstract data type in computer programming, supporting the following three operations:

  • add an element to the queue with an associated priority
  • remove the element from the queue that has the highest priority, and return it
  • (optionally) peek at the element with highest priority without removing it

The simplest way to implement a priority queue data type is to keep an associative array mapping each priority to a list of elements with that priority. If association lists are used to implement the associative array, adding an element takes constant time but removing or peeking at the element of highest priority takes linear (O(n)) time, because we must search all keys for the largest one. If a self-balancing binary search tree is used, all three operations take O(log n) time; this is a popular solution in environments that already provide balanced trees but nothing more sophisticated. The van Emde Boas tree, another associative array data structure, can perform all three operations in O(log log n) time, but at a space cost for small queues of about O(2^(m/2)), where m is the number of bits in the priority value, which may be prohibitive.
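
A sketch of the associative-array approach just described, with smaller numbers taken to mean higher priority; the class and method names are invented for illustration:

class BucketPriorityQueue:
    def __init__(self):
        self.buckets = {}                          # priority -> list of elements

    def add(self, element, priority):
        self.buckets.setdefault(priority, []).append(element)

    def pull(self):
        best = min(self.buckets)                   # linear scan over all priorities
        element = self.buckets[best].pop(0)        # remove the oldest element there
        if not self.buckets[best]:
            del self.buckets[best]
        return element

q = BucketPriorityQueue()
q.add('routine job', 5)
q.add('urgent job', 1)
print(q.pull())                                    # 'urgent job'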

There are a number of specialized heap data structures that either supply additional operations or outperform the above approaches. The binary heap uses O(log n) time for both operations, but allows peeking at the element of highest priority without removing it in constant time. Binomial heaps add several more operations, but require O(log n) time for peeking. Fibonacci heaps can insert elements, peek at the maximum priority element, and increase an element's priority in amortized constant time (deletions are still O(log n)).
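
In Python, the standard heapq module provides exactly this binary-heap behaviour on top of a plain list (heapq is a min-heap, so smaller numbers are treated here as higher priority):

import heapq

heap = []
heapq.heappush(heap, (2, 'routine job'))     # (priority, element) pairs
heapq.heappush(heap, (1, 'urgent job'))
heapq.heappush(heap, (3, 'background job'))

print(heap[0][1])                            # peek at the highest priority in O(1)
print(heapq.heappop(heap)[1])                # remove it in O(log n): 'urgent job'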

The Standard Template Library (STL), part of the C++ 1998 standard, specifies "priority_queue" as one of the STL container adaptor class templates. Unlike actual STL containers, it does not allow iteration of its elements (it strictly adheres to its abstract data type definition). Java's library contains a PriorityQueue class.

Bandwidth management

Priority queuing can be used to manage limited resources such as bandwidth on a transmission line from a network router. In the event of outgoing traffic queuing due to insufficient bandwidth, all other queues can be halted to send the traffic from the highest-priority queue upon arrival. This ensures that the prioritized traffic (such as real-time traffic, e.g. an RTP stream of a VoIP connection) is forwarded with the least delay and the least likelihood of being rejected due to a queue reaching its maximum capacity. All other traffic can be handled when the highest-priority queue is empty. Another approach used is to send disproportionately more traffic from higher-priority queues.

Usually a limitation (policer) is set to limit the bandwidth that traffic from the highest-priority queue can take, in order to prevent high-priority packets from choking off all other traffic. This limit is usually not reached due to higher-level control instances such as the Cisco CallManager, which can be programmed to inhibit calls that would exceed the programmed bandwidth limit.

Discrete event simulation

Another use of a priority queue is to manage the events in a discrete event simulation. The events are added to the queue with their simulation time used as the priority. The execution of the simulation proceeds by repeatedly pulling the top of the queue and executing the event thereon.
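
A toy sketch of such an event loop, again using heapq with the simulation time as the priority; the event times and descriptions are invented:

import heapq

events = []                                      # (simulation_time, description)
heapq.heappush(events, (5.0, 'customer leaves'))
heapq.heappush(events, (1.0, 'customer arrives'))
heapq.heappush(events, (2.5, 'service starts'))

clock = 0.0
while events:
    clock, description = heapq.heappop(events)   # always the earliest pending event
    print(clock, description)                    # the event would be executed here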

See also: Scheduling (computing), queueing theory

A* search algorithm

The A* search algorithm finds the shortest path between two vertices of a weighted graph, trying out the most promising routes first. The priority queue is used to keep track of unexplored routes; the one for which a lower bound on the total path length is smallest is given highest priority.
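
A compact A* sketch over a weighted graph, with the priority queue ordered by the lower bound g + h; the example graph, edge weights and heuristic values are invented for illustration:

import heapq

def a_star(graph, heuristic, start, goal):
    # graph: node -> list of (neighbour, edge weight); heuristic: node -> lower bound to goal
    frontier = [(heuristic[start], 0, start, [start])]    # (g + h, g, node, path so far)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)        # most promising route first
        if node == goal:
            return path, g
        for neighbour, weight in graph[node]:
            new_g = g + weight
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float('inf')

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)], 'D': []}
heuristic = {'A': 2, 'B': 2, 'C': 1, 'D': 0}
print(a_star(graph, heuristic, 'A', 'D'))                 # (['A', 'B', 'C', 'D'], 3)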

While relying on a heap is a common way to implement priority queues, faster implementations exist for integer data.

  • When the set of keys is {1, 2, ..., C}, the van Emde Boas data structure supports the minimum, maximum, insert, delete, search, extract-min, extract-max, predecessor and successor operations in O(log log C) time.

An algorithm by Fredman and Willard implements the minimum operation in O(1) time and the insert and extract-min operations in O(√(log n)) time.
