What is the difference between recursive backtracking and stack-based backtracking?

I know this is hard to follow. I've done it via macros in LISP, and it works well. The idea is that you're doing this to perform some kind of depth-first tree search. But you could do it instead as a series of "stabs" down from the root of the tree, following a different path each time you "stab". This might be objected to on performance grounds, but it doesn't actually cost that much more, since the bulk of the work happens in the leaves of the tree. Remember that real backtracking can involve quite a bit of machinery, making lambdas and so on.
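The "stab" idea above can be sketched as follows. This is a hypothetical reconstruction, not the original LISP macros: each root-to-leaf path is encoded as the bits of a counter (0 = go left, 1 = go right), and each stab walks down from the root following one encoding. Interior nodes are revisited on every stab that passes through them, which is the extra cost the text says is acceptable.

```java
// Sketch of "stab"-style search: one descent from the root per candidate
// path, instead of a single recursive depth-first search. All names here
// are assumptions, not code from the original post.
public class StabSearch {
    static class Node {
        boolean goal;
        Node left, right;
        Node(boolean goal, Node left, Node right) {
            this.goal = goal; this.left = left; this.right = right;
        }
    }

    static boolean stabSearch(Node root, int maxDepth) {
        for (int path = 0; path < (1 << maxDepth); path++) {
            Node node = root;
            for (int bit = 0; node != null; bit++) {
                if (node.goal) return true;   // this stab reached a goal node
                if (bit == maxDepth) break;   // out of bits: start the next stab
                node = ((path >> bit) & 1) == 0 ? node.left : node.right;
            }
        }
        return false;                         // every stab failed
    }

    public static void main(String[] args) {
        Node goal = new Node(true, null, null);
        Node root = new Node(false, new Node(false, null, null),
                             new Node(false, goal, null));
        System.out.println(stabSearch(root, 2)); // prints true
    }
}
```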

I've used this approach to build a home-grown theorem prover, in place of the usual search routine. Recursion is like a bottom-up process: you can solve the problem just by using the result of the sub-problem. For example, reversing a linked list using recursion is just appending the head node to the already-reversed sublist. Backtracking is more like a top-down process.
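The linked-list example above can be made concrete. This is a minimal sketch (the `ListNode` class and method names are my own, not from the post): recursively reverse the sublist after the head, then append the head to its end.

```java
// Reverse a linked list recursively: the answer for the whole list is the
// already-reversed sublist with the old head appended at its tail.
public class ReverseList {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val, ListNode next) { this.val = val; this.next = next; }
    }

    static ListNode reverse(ListNode head) {
        if (head == null || head.next == null) return head; // base case
        ListNode reversed = reverse(head.next); // already-reversed sublist
        head.next.next = head;                  // hang the head off its end
        head.next = null;                       // head is now the new tail
        return reversed;
    }

    public static void main(String[] args) {
        ListNode list = new ListNode(1, new ListNode(2, new ListNode(3, null)));
        for (ListNode r = reverse(list); r != null; r = r.next)
            System.out.print(r.val + " ");      // prints 3 2 1
    }
}
```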

Sometimes you can't solve the problem just by using the result of the sub-problem; you need to pass the information you have already gathered down to the sub-problems. The answers to this problem will be computed at the lowest level, and then those answers will be passed back up, together with the information you gathered along the way.


Difference between backtracking and recursion? What is the difference between backtracking and recursion? How is this program working?

In recursion, a function calls itself until it reaches a base case. Backtracking can be a bit hard to understand, so I'll attach some text from here: "Conceptually, you start at the root of a tree; the tree probably has some good leaves and some bad leaves, though it may be that the leaves are all good or all bad."

In chess, for example, a node can represent one arrangement of pieces on a chessboard, and each child of that node can represent the arrangement after some piece has made a legal move. How do you find these children, and how do you keep track of which ones you've already examined?

The most straightforward way to keep track of which children of the node have been tried is as follows: upon initial entry to the node (that is, when you first get there from above), make a list of all its children. As you try each child, take it off the list. When the list is empty, there are no remaining untried children, and you can return "failure." There is an easier way to keep track of which children have been tried, if you can define an ordering on the children. If there is an ordering, and you know which child you just tried, you can determine which child to try next.
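The list-based bookkeeping just described might look like this. The class and method names are assumptions; the paper's own code is not reproduced here.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// On first entry to a node, list all its children; take each child off the
// list as it is tried; an empty list means no untried children remain.
public class ListTracking {
    static class Node {
        boolean goal;
        List<Node> children = new ArrayList<>();
        Node(boolean goal) { this.goal = goal; }
    }

    static boolean search(Node node) {
        if (node.goal) return true;
        Deque<Node> untried = new ArrayDeque<>(node.children);
        while (!untried.isEmpty()) {
            Node child = untried.pop();   // remove the child as we try it
            if (search(child)) return true;
        }
        return false;                     // list empty: return "failure"
    }

    public static void main(String[] args) {
        Node root = new Node(false);
        root.children.add(new Node(false));
        root.children.add(new Node(true));
        System.out.println(search(root)); // prints true
    }
}
```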

For example, you might be able to number the children 1 through n, and try them in numerical order. Or, if you are trying to color a map with just four colors, you can always try red first, then yellow, then green, then blue. If child yellow fails, you know to try child green next. If you are searching a maze, you can try choices in the order left, straight, right, or perhaps north, east, south, west.

It isn't always easy to find a simple way to order the children of a node. In the chess game example, you might number your pieces (or perhaps the squares of the board) and try them in numerical order; but in addition, each piece may also have several moves, and these must also be ordered.

You can probably find some way to order the children of a node. If the ordering scheme is simple enough, you should use it; but if it is too cumbersome, you are better off keeping a list of untried children. For starters, let's do the simplest possible example of backtracking, which is searching an actual tree.

We will also use the simplest kind of tree, a binary tree. A binary tree is a data structure composed of nodes. One node is designated as the root node. Each node can reference (point to) zero, one, or two other nodes, which are called its children. All nodes are reachable by one or more steps from the root node, and there are no cycles. For our purposes, although this is not part of the definition of a binary tree, we will say that a node might or might not be a goal node, and will contain its name.

The first example in this paper (which we repeat here) shows a binary tree. Next we will create a TreeSearch class, and in it we will define a method makeTree which constructs the above binary tree. And finally, here's the recursive backtracking routine to "solve" the binary tree by finding a goal node.
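The code itself is not reproduced in this copy, so here is a sketch of what the text describes. The names `TreeSearch`, `makeTree`, and `solvable` come from the text; the bodies, and the shape of the tree in `makeTree`, are reconstructions.

```java
public class TreeSearch {
    static class Node {
        String name;
        boolean isGoal;
        Node left, right;
        Node(String name, boolean isGoal, Node left, Node right) {
            this.name = name; this.isGoal = isGoal;
            this.left = left; this.right = right;
        }
    }

    // Build a small binary tree with one goal node. The exact shape is an
    // assumption, since the paper's figure is not reproduced here.
    static Node makeTree() {
        Node goal = new Node("goal", true, null, null);
        return new Node("root", false,
                new Node("left", false, null, null),
                new Node("right", false, goal, null));
    }

    // Recursive backtracking: a node is solvable if it is a goal node or if
    // either subtree is solvable; a null node is a dead end, so we backtrack.
    static boolean solvable(Node node) {
        if (node == null) return false;
        if (node.isGoal) return true;
        return solvable(node.left) || solvable(node.right);
    }

    public static void main(String[] args) {
        System.out.println(solvable(makeTree())); // prints true
    }
}
```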

Each time we ask for another node, we have to check if it is null. In the above we put that check as the first thing in solvable. An alternative would be to check first whether each child exists, and recur only if they do.

Here's that alternative version, checking each child for null before recurring.

One of the things that simplifies the above binary tree search is that, at each choice point, you can ignore all the previous choices. Previous choices don't give you any information about what you should do next; as far as you know, both the left and the right child are possible solutions. In many problems, however, you may be able to eliminate children immediately, without recursion.
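The alternative version mentioned above, which tests whether each child exists before recurring, might look like this (a sketch; the original code is not reproduced in this copy, and the caller must guarantee the root itself is non-null):

```java
public class TreeSearchAlt {
    static class Node {
        boolean isGoal;
        Node left, right;
        Node(boolean isGoal, Node left, Node right) {
            this.isGoal = isGoal; this.left = left; this.right = right;
        }
    }

    // Check each child for existence first, so solvable never sees null.
    static boolean solvable(Node node) {
        if (node.isGoal) return true;
        if (node.left != null && solvable(node.left)) return true;
        if (node.right != null && solvable(node.right)) return true;
        return false;
    }

    public static void main(String[] args) {
        Node root = new Node(false, null, new Node(true, null, null));
        System.out.println(solvable(root)); // prints true
    }
}
```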

Consider, for example, the problem of four-coloring a map. It is a theorem of mathematics that any map on a plane, no matter how convoluted the countries are, can be colored with at most four colors, so that no two countries that share a border are the same color.

To color a map, you choose a color for the first country, then a color for the second country, and so on, until all countries are colored. There are two ways to do this: you can color the entire map before checking whether the coloring is legal, or you can check each country for conflicts with its already-colored neighbors as soon as you color it. Let's apply each of these two methods to the problem of coloring a checkerboard. This should be easily solvable; after all, a checkerboard only needs two colors. We define the following helper methods; the helper method code isn't displayed here because it's not important for understanding the method that does the backtracking.
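Since the paper's code is not reproduced in this copy, here is a sketch of the second method applied to the checkerboard: assign each square a color and reject it immediately if it clashes with an already-colored neighbor. The names and the flat `int[]` board representation are assumptions.

```java
// Four-color an n-by-n checkerboard, checking legality after every
// assignment so that bad choices are pruned immediately.
public class Checkerboard {
    static final int NUM_COLORS = 4;

    static boolean color(int[] board, int n, int cell) {
        if (cell == n * n) return true;               // every square colored
        for (int c = 0; c < NUM_COLORS; c++) {
            board[cell] = c;
            if (legalSoFar(board, n, cell) && color(board, n, cell + 1))
                return true;                          // this color worked out
        }
        board[cell] = -1;                             // undo and backtrack
        return false;
    }

    // Only earlier-colored neighbors need checking: left and above.
    static boolean legalSoFar(int[] board, int n, int cell) {
        int row = cell / n, col = cell % n;
        if (col > 0 && board[cell] == board[cell - 1]) return false;
        if (row > 0 && board[cell] == board[cell - n]) return false;
        return true;
    }

    public static void main(String[] args) {
        int n = 4;
        int[] board = new int[n * n];
        System.out.println(color(board, n, 0));       // prints true
    }
}
```

The first method would differ only in calling the legality check once, at the leaf (`cell == n * n`), after the whole board is colored.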

Those appear pretty similar, and you might think they are equally good. However, the timing information suggests otherwise. The zeros in the timing table indicate times too short to measure (less than 1 millisecond). Why this huge difference? Either of these methods could have exponential growth. Eliminating a node automatically eliminates all of its descendants, and this will often prevent exponential growth. Conversely, by waiting to check until a leaf node is reached, exponential growth is practically guaranteed.

If there is any way to eliminate children (reduce the set of choices), do so! Often our first try at a program doesn't work, and we need to debug it. Debuggers are helpful, but sometimes we need to fall back on inserting print statements. There are some simple tricks to making effective use of print statements.

These tricks can be applied to any program, but are especially useful when you are trying to debug recursive routines. As an example, consider the factorial function n!. The recursive definition of factorial looks like this: n! = 1 when n is 0 or 1, and n! = n × (n − 1)! when n is greater than 1.

This definition can easily be converted to a recursive implementation. Here the problem is determining the value of n!. In the recursive case, when n is greater than 1, the function calls itself to determine the value of (n − 1)! and multiplies that by n. In the base case, when n is 0 or 1, the function simply returns 1. This looks like the following:

Recursion and Memory Visualization: Each recursive call makes a new copy of that method (actually, only the variables) in memory.
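The recursive factorial implementation described above, as a sketch (the original book's code is not reproduced in this copy):

```java
public class Factorial {
    static long factorial(int n) {
        if (n <= 1) return 1;            // base case: 0! = 1! = 1
        return n * factorial(n - 1);     // recursive case: n! = n * (n-1)!
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```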

Once a method ends (that is, returns some data), the copy of that returning method is removed from memory. The recursive solutions look simple, but visualization and tracing take time. For better understanding, let us consider the following example.

Now, let us consider our factorial function. Recursion versus Iteration: While discussing recursion, the basic question that comes to mind is: which way is better? The answer to this question depends on what we are trying to do. A recursive approach mirrors the problem that we are trying to solve.

A recursive approach makes it simpler to solve a problem that may not have the most obvious of answers. But recursion adds overhead: each recursive call needs space on the stack. In this chapter, we cover a few problems with recursion, and we will discuss the rest in other chapters. By the time you complete reading the entire book, you will have encountered many recursion problems. Solution: The Towers of Hanoi is a mathematical puzzle.

It consists of three rods (or pegs, or towers) and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks on one rod in ascending order of size, the smallest at the top, thus making a conical shape. The objective of the puzzle is to move the entire stack to another rod, satisfying the following rules: only one disk may be moved at a time, only the topmost disk on a rod may be moved, and no disk may be placed on top of a smaller disk. Time Complexity: O(2^n).

Space Complexity: O(n), for the recursive stack space. Backtracking is an improvement of the brute force approach. It systematically searches for a solution to a problem among all available options. In backtracking, we start with one possible option out of many and try to solve the problem; if we are able to solve the problem with the selected move, then we print the solution, else we backtrack, select some other option, and try to solve it.

If none of the options works out, we claim that there is no solution to the problem. Backtracking is a form of recursion.
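The recursive Towers of Hanoi solution discussed above can be sketched as follows (method and parameter names are my own): to move n disks, move n − 1 disks out of the way, move the largest disk, then move the n − 1 disks back on top of it. The 2^n − 1 moves this generates match the O(2^n) time bound.

```java
import java.util.ArrayList;
import java.util.List;

public class Hanoi {
    static void hanoi(int n, char from, char to, char aux, List<String> moves) {
        if (n == 0) return;                       // nothing left to move
        hanoi(n - 1, from, aux, to, moves);       // clear the way
        moves.add("disk " + n + ": " + from + " -> " + to);
        hanoi(n - 1, aux, to, from, moves);       // stack back on top
    }

    public static void main(String[] args) {
        List<String> moves = new ArrayList<>();
        hanoi(3, 'A', 'C', 'B', moves);
        moves.forEach(System.out::println);       // 2^3 - 1 = 7 moves
    }
}
```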


