This slide deck was prepared by the following students of the Dept. of CSE, JnU, Dhaka. Thanks to: Nusrat Jahan, Arifatun Nesa, Fatema Akter, Maleka Khatun, Tamanna Tabassum.
2. MODEL OF COMPILER FRONT END

The front end of a compiler performs syntax analysis, also called parsing, which generates the parse tree.
3. PARSING

When the parser constructs the parse tree from the start symbol and tries to transform the start symbol into the input, it is called top-down parsing. Bottom-up parsing, by contrast, starts with the input symbols and tries to construct the parse tree up to the start symbol.
4. TOP-DOWN PARSER

A predictive parser is a recursive descent parser that can predict which production is to be used to derive the input string. Because of this, a predictive parser does not suffer from backtracking.
5. PREDICTIVE PARSER

Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree. Both the stack and the input carry an end symbol $ to denote that the stack is empty and the input is consumed. The parser consults the parsing table to decide what to do for each combination of input symbol and stack-top element.
6. LL(1) PARSER

An LL parser is called an LL(k) parser if it uses k tokens of lookahead when parsing a sentence. LL grammars, particularly LL(1) grammars, are popular because their parsers are easy to construct, and many computer languages are designed to be LL(1) for this reason. The 1 stands for using one input symbol of lookahead at each step to make the parsing-action decision.
7. CONTINUE…

LL(k) parsers must predict which production replaces a nonterminal as soon as they see that nonterminal. The basic LL algorithm starts with a stack containing [S, $] (top to bottom) and repeats whichever of the following is applicable until done:

- If the top of the stack is a nonterminal, replace the top of the stack with one of the productions for that nonterminal, using the next k input symbols to decide which one (without moving the input cursor), and continue.
- If the top of the stack is a terminal, read the next input token. If it is the same terminal, pop the stack and continue. Otherwise, the parse has failed and the algorithm finishes.
- If the stack is empty, the parse has succeeded and the algorithm finishes. (We assume that there is a unique EOF marker $ at the end of the input.)

So "lookahead" means looking at input tokens without moving the input cursor.
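As a concrete illustration of this driver loop, here is a minimal LL(1) recognizer sketch in Python. It assumes the expression grammar used later in these slides (E -> TE', E' -> +TE'|ε, T -> FT', T' -> *FT'|ε, F -> (E)|id) and a hand-written parsing table; it accepts or rejects a token list rather than building a parse tree.

```python
# A minimal table-driven LL(1) driver (a sketch, not a full parser).
NONTERMINALS = {"E", "E'", "T", "T'", "F"}

# Parsing table: (nonterminal, lookahead) -> right-hand side (ε is the empty list).
TABLE = {
    ("E", "id"): ["T", "E'"],      ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["F", "T'"],      ("T", "("): ["F", "T'"],
    ("T'", "*"): ["*", "F", "T'"], ("T'", "+"): [], ("T'", ")"): [], ("T'", "$"): [],
    ("F", "id"): ["id"],           ("F", "("): ["(", "E", ")"],
}

def ll1_parse(tokens):
    tokens = tokens + ["$"]              # end marker on the input
    stack = ["$", "E"]                   # bottom marker and start symbol
    i = 0
    while stack:
        top = stack.pop()
        look = tokens[i]
        if top in NONTERMINALS:
            rhs = TABLE.get((top, look))
            if rhs is None:
                return False             # no table entry: syntax error
            stack.extend(reversed(rhs))  # push RHS so its first symbol is on top
        elif top == look:
            i += 1                       # terminal matches: advance the input cursor
        else:
            return False                 # terminal mismatch
    return i == len(tokens)             # success iff all input (incl. $) is consumed

print(ll1_parse(["id", "+", "id", "*", "id"]))  # True
```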
8. PRIME REQUIREMENTS OF LL(1)

The grammar must have:
- no common left factors (i.e., it must be left factored)
- no left recursion

Building the parser then requires:
- FIRST() & FOLLOW()
- a parsing table
- a stack implementation
- the parse tree
10. LEFT FACTORING

A grammar needs left factoring when it is of the form

A -> αβ1 | αβ2 | αβ3 | … | αβn | γ

that is, several productions start with the same prefix α. When the choice between two alternative A-productions is not clear, we may be able to rewrite the productions to defer the decision until enough of the input has been seen to make the right choice.

For the grammar

A -> αβ1 | αβ2 | αβ3 | … | αβn | γ

the equivalent left-factored grammar is:

A -> αA' | γ
A' -> β1 | β2 | β3 | … | βn
11. CONTINUE…

For example, with the input string aab and the grammar

S -> aAb | aA | ab
A -> bAc | ab

after left factoring we get:

S -> aA'
A' -> Ab | A | b
A -> ab | bAc
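The sketch below shows one way this transformation might be automated in Python, under simplifying assumptions: a grammar is a dict mapping each nonterminal to its alternatives (lists of symbols), factoring looks only at a one-symbol common prefix, and fresh primed names such as S' are invented for the new nonterminals. Full left factoring would extract the longest common prefix and repeat until no two alternatives share one.

```python
from collections import defaultdict

def left_factor(productions):
    """One round of left factoring: group each nonterminal's alternatives
    by their first symbol and split out the shared one-symbol prefix."""
    new = {}
    counter = 0
    for head, alts in list(productions.items()):
        groups = defaultdict(list)
        for alt in alts:                          # alt is a list of symbols
            groups[alt[0] if alt else "ε"].append(alt)
        result = []
        for prefix, group in groups.items():
            if len(group) == 1:
                result.append(group[0])           # no common prefix here
            else:
                counter += 1
                fresh = head + "'" * counter      # invented fresh nonterminal
                result.append([prefix, fresh])
                new[fresh] = [alt[1:] or ["ε"] for alt in group]
        productions[head] = result
    productions.update(new)
    return productions

# S -> aAb | aA | ab  becomes  S -> aS' ;  S' -> Ab | A | b
g = {"S": [["a", "A", "b"], ["a", "A"], ["a", "b"]],
     "A": [["a", "b"], ["b", "A", "c"]]}
print(left_factor(g))
```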
13. RECURSION

The process in which a function calls itself directly or indirectly is called recursion, and the corresponding function is called a recursive function. In grammars there are two analogous types of recursion: left recursion and right recursion.
14. Left Recursion vs Right Recursion

For the left recursive grammar A -> Aα | β, the parse tree grows down its left side and generates βα*. For the right recursive grammar A -> αA | β, the parse tree grows down its right side and generates α*β.
15. Right recursion

A production of a grammar is said to have right recursion if the rightmost variable of its RHS is the same as the variable of its LHS, e.g. A -> αA | β. A grammar containing a production with right recursion is called a right recursive grammar. Right recursion does not create any problem for top-down parsers, so there is no need to eliminate right recursion from the grammar.
16. Left recursion

A production of a grammar is said to have left recursion if the leftmost variable of its RHS is the same as the variable of its LHS, e.g. A -> Aα | β. A grammar containing a production with left recursion is called a left recursive grammar. Left recursion must be eliminated because top-down parsing methods cannot handle left recursive grammars.
17. Left Recursion

A grammar is left recursive if it has a nonterminal A such that there is a derivation A => Aα for some string α.

Immediate/direct left recursion: a production is immediately left recursive if its left-hand side and the head of its right-hand side are the same symbol, e.g. A -> Aα, where α is a sequence of nonterminals and terminals.

Indirect left recursion: indirect left recursion occurs when the definition of left recursion is satisfied via several substitutions. It entails a set of rules following the pattern

A -> Br
B -> Cs
C -> At

Here, starting with A, we can derive A => Br => Csr => Atsr.
18. Elimination of Left Recursion

Suppose the grammar were

A -> Aα | β

How could the parser decide how many times to use the production A -> Aα before using the production A -> β?

Left recursion in a production may be removed by transforming the grammar in the following way. Replace

A -> Aα | β

with

A -> βA'
A' -> αA' | ε
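A direct transcription of this rule into Python might look as follows; it is a sketch that assumes each alternative is given as a list of grammar symbols and that ε is written as the one-element body ["ε"].

```python
def eliminate_immediate_left_recursion(head, alts):
    """Replace A -> Aα1 | ... | Aαm | β1 | ... | βn  with
       A  -> β1A' | ... | βnA'
       A' -> α1A' | ... | αmA' | ε
    where each alternative is a list of grammar symbols."""
    recursive = [alt[1:] for alt in alts if alt and alt[0] == head]   # the αi
    others    = [alt for alt in alts if not alt or alt[0] != head]    # the βi
    if not recursive:
        return {head: alts}                       # no immediate left recursion
    fresh = head + "'"                            # invented fresh nonterminal A'
    return {
        head:  [beta + [fresh] for beta in others],
        fresh: [alpha + [fresh] for alpha in recursive] + [["ε"]],
    }

# E -> E + T | T   becomes   E -> T E' ;  E' -> + T E' | ε
print(eliminate_immediate_left_recursion("E", [["E", "+", "T"], ["T"]]))
```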
19. EXAMPLE OF IMMEDIATE LEFT RECURSION

Consider the left recursive grammar

E -> E + T | T
T -> T * F | F
F -> (E) | id

Apply the transformation to E:

E -> T E'
E' -> + T E' | ε

Then apply the transformation to T:

T -> F T'
T' -> * F T' | ε

Now the grammar is

E -> T E'
E' -> + T E' | ε
T -> F T'
T' -> * F T' | ε
F -> (E) | id
20. Continue…

The general case covers several immediate left recursive A-productions. Assume the set of all A-productions has the form

A -> Aα1 | Aα2 | · · · | Aαm | β1 | β2 | · · · | βn

where no βi begins with A. Then we can replace these A-productions by

A -> β1A' | β2A' | · · · | βnA'
A' -> α1A' | α2A' | · · · | αmA' | ε
21. Example:

Consider the left recursive grammar

S -> SX | SSb | XS | a
X -> Xb | Sa

Apply the transformation to S:

S -> XSS' | aS'
S' -> XS' | SbS' | ε

Apply the transformation to X:

X -> SaX'
X' -> bX' | ε

Now the grammar is

S -> XSS' | aS'
S' -> XS' | SbS' | ε
X -> SaX'
X' -> bX' | ε
22. Example of eliminating indirect left recursion:

S -> AA | 0
A -> SS | 1

Considering the ordering S, A, we substitute the S-productions into the A-productions that begin with S and get:

A -> AAS | 0S | 1

and removing the immediate left recursion, we get

S -> AA | 0
A -> 0SA' | 1A'
A' -> ASA' | ε
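For the general case, the classic algorithm orders the nonterminals, substitutes earlier nonterminals at the front of later ones, and then removes immediate left recursion at each step. A sketch building on the function above (and assuming no ε-productions or cycles among the ordered nonterminals):

```python
def eliminate_left_recursion(grammar, order):
    """For each Ai in order, substitute earlier nonterminals Aj (j < i) at the
    front of Ai's alternatives, then remove Ai's immediate left recursion."""
    for i, ai in enumerate(order):
        for aj in order[:i]:
            expanded = []
            for alt in grammar[ai]:
                if alt and alt[0] == aj:          # Ai -> Aj γ  =>  substitute Aj
                    expanded += [prod + alt[1:] for prod in grammar[aj]]
                else:
                    expanded.append(alt)
            grammar[ai] = expanded
        grammar.update(eliminate_immediate_left_recursion(ai, grammar[ai]))
    return grammar

# The slide-22 grammar:  S -> AA | 0 ;  A -> SS | 1, with ordering S, A.
g = {"S": [["A", "A"], ["0"]], "A": [["S", "S"], ["1"]]}
print(eliminate_left_recursion(g, ["S", "A"]))
# yields S -> AA | 0 ;  A -> 0SA' | 1A' ;  A' -> ASA' | ε
```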
24. Why use FIRST and FOLLOW:

During parsing, FIRST and FOLLOW help us choose which production to apply, based on the next input symbol. Without them, syntax analysis needs backtracking, which is complex to implement; FIRST and FOLLOW offer an easier way around this problem. If the compiler knows in advance the first character of the string produced when a production rule is applied, it can compare it to the current character or token in the input and wisely decide which production rule to apply. FOLLOW is used only if the current nonterminal can derive ε.
25. Rules of FIRST

FIRST always collects terminal symbols from the grammar. When we compute FIRST for a symbol and find a terminal in the first position, we take it and do not look at the next symbol.

If a grammar is A -> a, then FIRST(A) = { a }.
If a grammar is A -> aB, then FIRST(A) = { a }.
26. Rules of FIRST
If a grammar is
A → aB | ε then FIRST(A) = { a, ε }
If a grammar is
A → BcD | ε
B → eD | (A)
Here B is a nonterminal, so we look through B's productions to find FIRST(A):
FIRST(A) = { e, ( , ε }
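These rules amount to a small fixed-point computation. The following Python sketch is illustrative only (same list-of-symbols encoding as the earlier sketches, with "" standing for ε; the production D → d is an assumed completion, since the slide leaves D undefined):

# FIRST of a sequence of symbols; "" stands for epsilon.
def first_of(seq, first, nonterminals):
    out = set()
    for sym in seq:
        if sym in nonterminals:
            out |= first[sym] - {""}
            if "" not in first[sym]:
                return out                  # sym cannot vanish; stop here
        else:
            out.add(sym)                    # terminal: take it, look no further
            return out
    out.add("")                             # the whole sequence can derive epsilon
    return out

# Iterate until no FIRST set grows any more.
def first_sets(grammar):
    nts = set(grammar)
    first = {nt: set() for nt in nts}
    changed = True
    while changed:
        changed = False
        for nt, alts in grammar.items():
            for alt in alts:
                new = first_of(alt, first, nts)
                if not new <= first[nt]:
                    first[nt] |= new
                    changed = True
    return first

g = {"A": [["B", "c", "D"], []], "B": [["e", "D"], ["(", "A", ")"]], "D": [["d"]]}
print(first_sets(g))                        # FIRST(A) = {e, (, ""}  i.e. {e, (, ε}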
27. Rules of FOLLOW
Computing FOLLOW usually requires FIRST. In FOLLOW we add a $ for the start symbol. FOLLOW always looks at what appears to the right of the symbol.
If a grammar is
A → BAc ; A is the start symbol.
Here we first check where the symbol appears on the right-hand side of a production. We see that c appears immediately to the right of A, so
FOLLOW(A) = { c, $ }
28. Rules of FOLLOW
If a grammar is
A → BA'
A' → *Bc
Here we see that nothing appears to the right of A' in A → BA', so
FOLLOW(A') = FOLLOW(A) = { $ }
because A' ends a production of the start symbol.
29. Rules of FOLLOW
If a grammar is
A → BC
B → Td
C → *D | ε
To find FOLLOW(B), we see that B is followed by C, so we put FIRST(C) there:
FIRST(C) = { *, ε }.
But when the value is ε, B also inherits the FOLLOW of the parent symbol, so
FOLLOW(B) = { *, $ }
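FOLLOW fits the same fixed-point mold, reusing first_of from the FIRST sketch; again an illustrative sketch rather than the slides' own procedure:

def follow_sets(grammar, start, first):
    nts = set(grammar)
    follow = {nt: set() for nt in nts}
    follow[start].add("$")                  # $ follows the start symbol
    changed = True
    while changed:
        changed = False
        for nt, alts in grammar.items():
            for alt in alts:
                for i, sym in enumerate(alt):
                    if sym not in nts:
                        continue
                    tail = first_of(alt[i + 1:], first, nts)
                    add = tail - {""}       # FIRST of what follows sym
                    if "" in tail:          # the tail can vanish, so sym
                        add |= follow[nt]   # inherits FOLLOW of the parent
                    if not add <= follow[sym]:
                        follow[sym] |= add
                        changed = True
    return follow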
30-40. Example of FIRST and FOLLOW
GRAMMAR:
E -> TE'
E'-> +TE'|ε
T -> FT'
T' -> *FT'|ε
F -> (E)|id
Filling the table in one entry at a time gives:
Symbol   FIRST        FOLLOW
E        { ( , id }   { $ , ) }
E'       { + , ε }    { $ , ) }
T        { id , ( }   { $ , ) , + }
T'       { * , ε }    { $ , ) , + }
F        { id , ( }   { $ , ) , + , * }
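As a sanity check, running the earlier FIRST/FOLLOW sketches on this grammar should reproduce the finished table ("" prints in place of ε):

g = {"E":  [["T", "E'"]],
     "E'": [["+", "T", "E'"], []],
     "T":  [["F", "T'"]],
     "T'": [["*", "F", "T'"], []],
     "F":  [["(", "E", ")"], ["id"]]}
first = first_sets(g)
print(first)                          # E: {(, id}, E': {+, ""}, ...
print(follow_sets(g, "E", first))     # E: {$, )}, T: {$, ), +}, ...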
42. Example of LL(1) grammar
E -> TE’
E’ -> +TE’|ε
T -> FT’
T’ -> *FT’|ε
F -> (E)|id
43-53. Building the parsing table
TABLE: FIRST & FOLLOW
Production      FIRST        FOLLOW
E -> TE'        { ( , id }   { $ , ) }
E' -> +TE'|ε    { + , ε }    { $ , ) }
T -> FT'        { ( , id }   { + , $ , ) }
T' -> *FT'|ε    { * , ε }    { + , $ , ) }
F -> (E)|id     { ( , id }   { *, + , $ , ) }
Each production A -> α is entered into M[A, a] for every terminal a in FIRST(α); if ε is in FIRST(α), it is also entered into M[A, b] for every b in FOLLOW(A). Filling the cells one at a time in this way yields the parsing table shown complete on the next slide.
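The fill rules just stated can be sketched on top of the earlier FIRST/FOLLOW helpers; build_table and its conflict report are illustrative names, not an established API:

# For each A -> alpha: enter it at M[A, a] for a in FIRST(alpha);
# if alpha can vanish, also at M[A, b] for b in FOLLOW(A).
def build_table(grammar, start):
    nts = set(grammar)
    first = first_sets(grammar)
    follow = follow_sets(grammar, start, first)
    table = {}                          # (nonterminal, terminal) -> [alternatives]
    for nt, alts in grammar.items():
        for alt in alts:
            lookahead = first_of(alt, first, nts)
            if "" in lookahead:
                lookahead = (lookahead - {""}) | follow[nt]
            for a in lookahead:
                table.setdefault((nt, a), []).append(alt)
    conflicts = {k: v for k, v in table.items() if len(v) > 1}
    return table, conflicts             # no conflicts <=> the grammar is LL(1)

table, conflicts = build_table(g, "E")
print(conflicts)                        # {} here: the expression grammar is LL(1)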
54. Continue…
TABLE: PARSING TABLE
Non-Terminal | id       | +          | *          | (        | )       | $
E            | E -> TE' |            |            | E -> TE' |         |
E'           |          | E' -> +TE' |            |          | E' -> ε | E' -> ε
T            | T -> FT' |            |            | T -> FT' |         |
T'           |          | T' -> ε    | T' -> *FT' |          | T' -> ε | T' -> ε
F            | F -> id  |            |            | F -> (E) |         |
This grammar is LL(1).
So, the parse tree can be derived from the stack implementation of the given parsing table.
55. Continue…
There are grammars which are not LL(1).
For example, look at the next grammar…
56. Continue…
GRAMMAR:
S → iEtSS' | a
S' → eS | ε
E → b
TABLE: FIRST & FOLLOW
SYMBOL   FIRST      FOLLOW
S        { a , i }  { $ , e }
S'       { e , ε }  { $ , e }
E        { b }      { t }
57-63. Continue…
TABLE: FIRST & FOLLOW
Production        FIRST      FOLLOW
S → iEtSS' | a    { a , i }  { $ , e }
S' → eS | ε       { e , ε }  { $ , e }
E → b             { b }      { t }
Filling the parsing table cell by cell gives:
TABLE: PARSING TABLE
Non-Terminal | a     | b     | e                  | i           | t | $
S            | S → a |       |                    | S → iEtSS'  |   |
S'           |       |       | S' → eS and S' → ε |             |   | S' → ε
E            |       | E → b |                    |             |   |
AMBIGUITY: the cell M[S', e] receives two entries.
64. Continue…
The grammar is ambiguous, and this is evident from the fact that we have two entries in M[S', e]: S' → ε and S' → eS.
In practice the conflict is resolved by always choosing S' → eS, which associates each else with the closest unmatched then.
LL(1) grammars have distinct properties:
- No ambiguous grammar or left recursive grammar can be LL(1).
Thus, the given grammar is not LL(1).
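The build_table sketch from earlier flags this double entry mechanically; the encoding of the grammar below is an assumption mirroring the slide:

g2 = {"S":  [["i", "E", "t", "S", "S'"], ["a"]],
      "S'": [["e", "S"], []],
      "E":  [["b"]]}
_, conflicts = build_table(g2, "S")
print(conflicts)        # {("S'", "e"): [["e", "S"], []]} : the M[S', e] clash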
66. STACK Implementation
The predictive parser uses an explicit stack to keep track of pending nonterminals. It can thus be implemented without recursion.
Note that the productions output trace out a leftmost derivation.
The grammar symbols on the stack make up left-sentential forms.
67. LL(1) Stack
The input buffer contains the string to be parsed; $ is the end-of-input marker.
The stack contains a sequence of grammar symbols.
Initially, the stack contains the start symbol of the grammar on top of $.
68. LL(1) Stack
The parser is controlled by a program that behaves as follows:
The program considers X, the symbol on top of the stack, and a, the current input symbol.
These two symbols, X and a, determine the action of the parser.
There are three possibilities.
69. LL(1) Stack
1. If X = a = $,
the parser halts and announces successful completion.
2. If X = a ≠ $,
the parser pops X off the stack and advances the input pointer to the next input symbol.
3. If X is a nonterminal, the program consults entry M[X, a] of parsing table M.
If the entry is a production M[X, a] = { X → UVW }, then the parser replaces X on top of the stack by WVU (with U on top).
As output, the parser just prints the production used: X → UVW.
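A compact sketch of this three-case loop, driven by a conflict-free table from the earlier build_table sketch (token and symbol encodings are illustrative):

def ll1_parse(table, start, tokens):
    stack = ["$", start]                # start symbol on top of $
    pos = 0
    while True:
        X, a = stack[-1], tokens[pos]
        if X == a == "$":               # case 1: halt and accept
            return True
        if X == a:                      # case 2: match, pop, advance input
            stack.pop()
            pos += 1
        elif (X, a) in table:           # case 3: expand X via M[X, a]
            alt = table[(X, a)][0]
            print(X, "->", " ".join(alt) or "ε")   # output the production used
            stack.pop()
            stack.extend(reversed(alt)) # push RHS so its first symbol is on top
        else:
            return False                # blank entry: report an error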
70. LL(1) Stack
Example:
Let's parse the input string
id + id * id
using the nonrecursive LL(1) parser.
Grammar:
E -> TE'
E'-> +TE'|ε
T -> FT'
T' -> *FT'|ε
F -> (E)|id
74-79. [Stack snapshots: six slides stepping the stack and remaining input through the parse of id + id * id against parsing table M; the complete trace appears in the table on the next slide.]
80. Continue…
MATCHED     STACK      INPUT        ACTION
            E$         id+id*id$
            TE'$       id+id*id$    E -> TE'
            FT'E'$     id+id*id$    T -> FT'
            idT'E'$    id+id*id$    F -> id
id          T'E'$      +id*id$      Match id
id          E'$        +id*id$      T' -> ε
id          +TE'$      +id*id$      E' -> +TE'
id+         TE'$       id*id$       Match +
id+         FT'E'$     id*id$       T -> FT'
id+         idT'E'$    id*id$       F -> id
id+id       T'E'$      *id$         Match id
id+id       *FT'E'$    *id$         T' -> *FT'
id+id*      FT'E'$     id$          Match *
id+id*      idT'E'$    id$          F -> id
id+id*id    T'E'$      $            Match id
id+id*id    E'$        $            T' -> ε
id+id*id    $          $            E' -> ε
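Feeding the expression-grammar table from the earlier sketches to ll1_parse reproduces the production sequence of this trace:

tokens = ["id", "+", "id", "*", "id", "$"]
print(ll1_parse(table, "E", tokens))    # prints E -> T E' ... and then True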