Introduction to Algorithms
An algorithm is a fundamental concept in computer science that refers to a step-by-step procedure used to solve a given problem or perform a specific task. Each step in an algorithm is logically defined and leads toward the solution of the problem.
An algorithm begins with an input (which may be zero or more values), processes the input through a sequence of well-defined instructions, and finally produces the required output. It is important to understand that an algorithm is not a program or code, but a logical solution design that can be implemented in any programming language.
In simple terms, an algorithm is a finite sequence of instructions that solves a problem in a systematic way.
Definition of Algorithm
An algorithm is defined as a finite sequence of well-defined and unambiguous instructions that accepts input, processes it through logical steps, and produces the required output to solve a specific problem.
The important aspect of an algorithm is that it must always:
- Start with a clear purpose
- Follow logical steps
- Terminate after completing the task
In simple words, an algorithm is a step-by-step method to solve a problem in a finite amount of time.
Properties / Characteristics of an Algorithm
For an algorithm to be correct and useful, it must satisfy certain important properties. These properties ensure that the algorithm is practical, reliable, and executable.
1. Input and Output
An algorithm must accept zero or more inputs depending on the problem requirement and must produce at least one output. The input represents the data given to the algorithm, while the output represents the final result obtained after processing the input.
For example, in a program that adds two numbers, the input is the two numbers and the output is their sum.
This property ensures that the algorithm is useful and goal-oriented.
- Can take zero or more inputs
- Must produce at least one output
- Defines what goes in and what comes out
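The input/output property can be illustrated with a short sketch. Python is used here only for illustration (the pseudocode in this document is language-neutral), and the function names are chosen for this example:

```python
# An algorithm with two inputs and one output: add two numbers.
def add(a, b):
    return a + b

# An algorithm with zero inputs is also valid, as long as it
# still produces at least one output.
def smallest_two_digit_number():
    return 10

print(add(3, 4))                    # two inputs -> one output: 7
print(smallest_two_digit_number())  # zero inputs -> one output: 10
```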
2. Finiteness
An algorithm must always terminate after a finite number of steps. It should not run indefinitely. This ensures that the algorithm provides a solution in a reasonable amount of time.
If an algorithm does not stop, it is considered incorrect because it will never produce a final result.
This ensures that the solution is practical and computable.
Key points:
- Must end after limited steps
- Infinite loops make algorithm invalid
- Guarantees completion of task
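As a sketch of finiteness, consider Euclid's algorithm for the greatest common divisor (shown here in Python for illustration): the second value strictly decreases toward zero on every pass, so the loop is guaranteed to stop after a limited number of steps.

```python
def gcd(a, b):
    # Euclid's algorithm: b strictly decreases on every pass,
    # so the loop terminates after finitely many steps.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```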
3. Definiteness
Each step of an algorithm must be clearly defined, precise, and unambiguous. There should be no confusion in interpreting any instruction, and each step must have only one meaning.
If a step is unclear, different people or systems may interpret it differently, leading to incorrect results.
This ensures clarity and precision in problem-solving.
- Each step is clear and precise
- No confusion or ambiguity
- Same input always produces same behavior
4. Effectiveness
An algorithm must consist of basic, feasible operations, each of which can be carried out exactly and in a finite amount of time. Each step should be simple enough to be executed manually or by a computer.
Complex or impractical steps reduce the usefulness of an algorithm.
This ensures that the algorithm is practically implementable.
- Steps must be simple
- Must be executable in real systems
- Should not include unrealistic instructions
5. Generality
An algorithm should be designed in such a way that it can solve a class of problems rather than just a single specific problem. It should work for all valid inputs of the problem.
This makes the algorithm reusable and applicable to a wide range of inputs.
This ensures flexibility and reusability.
- Works for multiple inputs
- Not restricted to one case
- Applicable to general problems
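As a sketch of generality (in Python, for illustration): instead of an algorithm that compares exactly three fixed numbers, the version below finds the largest element of any non-empty list, so one design covers a whole class of inputs.

```python
def largest(values):
    # Works for any non-empty list, not just one fixed case.
    result = values[0]
    for v in values[1:]:
        if v > result:
            result = v
    return result

print(largest([3, 9, 2]))  # -> 9
print(largest([5]))        # -> 5
```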
Categories of Algorithm
Algorithms can be classified based on the type of control structure used in their steps. The three main categories are Sequence, Selection, and Iteration.
1. Sequence
A sequence algorithm is the simplest type of algorithm in which instructions are executed one after another in a linear order. There is no decision-making or repetition involved in this type of algorithm. Each step is executed exactly once in a predefined order.
In a sequence algorithm:
- Steps are executed from top to bottom
- No decision-making is involved
- No step is skipped or repeated
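A pure sequence maps one-to-one onto straight-line code. As a sketch in Python (any language would do; the function name is chosen for this example):

```python
def add_two_numbers(a, b):
    # A pure sequence: each step runs exactly once, in order.
    total = a + b  # compute the sum
    return total   # output the result

print(add_two_numbers(7, 5))  # -> 12
```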
Example: Adding Two Numbers
Step 1: Start
Step 2: Read a, b
Step 3: sum = a + b
Step 4: Print sum
Step 5: Stop

2. Selection
Selection refers to the type of algorithm in which the execution depends on a condition. The algorithm makes a decision based on whether a given condition is true or false, and accordingly selects one of the possible paths of execution.
This introduces branching in the flow of execution.
In selection:
- A condition is evaluated
- Based on result, different steps are executed
- Only one path is followed
Structure
if(condition)
Statement-1
else
Statement-2

Example: Even or Odd Number
Step 1: Start
Step 2: Read n
Step 3: If n % 2 == 0
Print "Even"
else
Print "Odd"
Step 4: Stop
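The even/odd steps above translate directly into code. A Python sketch (for illustration only; the function name is chosen for this example):

```python
def even_or_odd(n):
    # Selection: exactly one branch runs, chosen by the condition.
    if n % 2 == 0:
        return "Even"
    else:
        return "Odd"

print(even_or_odd(10))  # -> Even
print(even_or_odd(7))   # -> Odd
```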
3. Iteration (Looping)
Iteration refers to the repeated execution of a set of instructions until a specific condition is satisfied. It is used when a task needs to be performed multiple times. Instead of writing the same steps repeatedly, loops are used to automate repetition.
In iteration:
- A block of instructions repeats
- Condition controls repetition
- Loop stops when condition becomes false
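As a sketch, the sum-of-digits loop described below looks like this in Python (used only for illustration; note the integer division `//`, which matches the pseudocode's whole-number division by 10):

```python
def sum_of_digits(n):
    # Iteration: the loop body repeats while the condition holds.
    s = 0
    while n > 0:
        r = n % 10   # take the last digit
        s = s + r    # add it to the running sum
        n = n // 10  # drop the last digit (integer division)
    return s

print(sum_of_digits(123))  # -> 6
```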
Example: Sum of Digits
Step 1: Start
Step 2: Read n, set s = 0
Step 3: Repeat while n > 0
(a) r = n % 10
(b) s = s + r
(c) n = n / 10
Step 4: Print s
Step 5: Stop

Conclusion

An algorithm is a systematic and logical method for solving problems. It must be finite, precise, and effective. Algorithms form the foundation of computer programming, as every program is developed based on a well-designed algorithm.
The three main categories of algorithms (sequence, selection, and iteration) help in structuring the flow of instructions and solving problems efficiently.