CS 346: Compilers
Lexical Analyzer
reads the source program character by character to produce tokens
doesn't return the whole list of tokens at one shot
returns a token whenever the parser asks for one
[Diagram: source program → Lexical Analyzer → token → Parser; the parser requests "get next token"]
Token
a token represents a set of strings described by a pattern
Identifier: a set of strings which start with a letter and continue with letters and digits
The actual string is called the lexeme
Tokens: identifier, number, addop, delimiter, …
Attribute:
additional information specific to a lexeme
For simplicity, a token may have a single attribute which holds the required information for that token
For identifiers, this attribute is a pointer to the symbol table and the symbol table holds the
actual attributes for that token
Some attributes:
<id,attr> where attr is a pointer to the symbol table
<assgop,_> no attribute is needed (if there is only one assignment operator)
<num,val> where val is the actual value of the number
Token type and its attribute uniquely identify a lexeme
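The token/attribute pairing can be pictured with a small C sketch; the enum values and the union layout below are illustrative assumptions, not something defined in these slides:

#include <stdio.h>

typedef enum { TOK_ID, TOK_NUM, TOK_ASSGOP, TOK_ADDOP } TokenType;   /* hypothetical token kinds */

typedef struct {
    TokenType type;          /* the token class                        */
    union {                  /* one attribute per token, as above      */
        int    symtab_index; /* <id, attr>  : index into symbol table  */
        double value;        /* <num, val>  : actual value of a number */
    } attr;                  /* <assgop, _> : attribute unused         */
} Token;

int main(void) {
    Token t = { .type = TOK_NUM, .attr.value = 3.14 };
    printf("%d %g\n", t.type, t.attr.value);   /* prints: 1 3.14 */
}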
Regular expressions: a widely used technique to specify patterns
Terminology of Languages
Alphabet : a finite set of symbols (a, b, X etc.)
String :
finite sequence of symbols over an alphabet
the terms sentence and word are also used for strings
ε : the empty string
|s| : length of string s
Language:
sets of strings over some fixed alphabet
∅ : the empty set is a language
{ε} : the set containing only the empty string is a language
The set of well-formed C programs is a language
The set of all possible identifiers is a language
Operators on Strings:
Concatenation: xy represents the concatenation of strings x and y
sε = εs = s
sⁿ = s s s ... s (n times),   s⁰ = ε
Operations on Languages
Concatenation:
L1L2 = { s1s2 | s1 ∈ L1 and s2 ∈ L2 }
Union
L1 ∪ L2 = { s | s ∈ L1 or s ∈ L2 }
Exponentiation:
L⁰ = {ε}    L¹ = L    L² = LL
Kleene Closure
L* = L⁰ ∪ L¹ ∪ L² ∪ ...   (zero or more concatenations of L)
Positive Closure
L+ = L¹ ∪ L² ∪ L³ ∪ ...   (one or more concatenations of L)
Example
L1 = {a,b,c,d} L2 = {1,2}
L1L2 = {a1, a2, b1, b2, c1, c2, d1, d2}
L1 ∪ L2 = {a, b, c, d, 1, 2}
L1³ = all strings of length three over {a,b,c,d}
L1* = all strings over {a,b,c,d}, including the empty string
L1+ = same as L1*, but does not include the empty string
Regular Expressions
Convenient and most popular way to describe tokens of a
programming language
A regular expression is built up of simpler regular expressions (using
defining rules)
Each regular expression denotes a language
regular set : Language denoted by a regular expression
Regular Expressions (Rules)
Regular expressions over alphabet Σ
Reg. Expr          Language it denotes
ε                  {ε}
a ∈ Σ              {a}
(r1) | (r2)        L(r1) ∪ L(r2)
(r1) (r2)          L(r1) L(r2)
(r)*               (L(r))*
(r)                L(r)
(r)+ = (r)(r)*
(r)? = (r) | ε
Regular Expressions (cont.)
We may remove parentheses by using precedence rules
* highest
concatenation next
| lowest
ab*|c means (a(b)*)|(c)
Ex:
Σ = {0,1}
0|1 => {0,1}
(0|1)(0|1) => {00, 01, 10, 11}
0* => {ε, 0, 00, 000, 0000, ...}
(0|1)* => all strings with 0 and 1, including the empty string
Regular Definitions
Writing a regular expression for some languages may be difficult
Alternative: regular definitions
Assign names to regular expressions and we can use these names as symbols
to define other regular expressions
A regular definition is a sequence of definitions of the form:
   d1 → r1        where each di is a distinct name, and
   d2 → r2        each ri is a regular expression over the symbols in
   ...            Σ ∪ {d1, d2, ..., di-1}
   dn → rn        (i.e. the basic symbols and the previously defined names)
Regular Definitions (cont.)
Ex: Identifiers in Pascal
letter → A | B | ... | Z | a | b | ... | z
digit  → 0 | 1 | ... | 9
id     → letter ( letter | digit )*
If we try to write the regular expression representing identifiers without using regular definitions, that
regular expression will be complex.
(A|...|Z|a|...|z) ( (A|...|Z|a|...|z) | (0|...|9) ) *
Ex: Unsigned numbers in Pascal
digit        → 0 | 1 | ... | 9
digits       → digit+
opt-fraction → ( . digits ) ?
opt-exponent → ( E (+|-)? digits ) ?
unsigned-num → digits opt-fraction opt-exponent
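As a rough illustration of turning such a definition into code, here is a minimal hand-written C checker for unsigned-num; the function name and the ad-hoc scanning style are my own choices (a generated DFA would normally be used instead):

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

/* true iff s consists exactly of: digits ( . digits )? ( E (+|-)? digits )? */
static bool is_unsigned_num(const char *s) {
    const char *p = s;
    if (!isdigit((unsigned char)*p)) return false;           /* digits       */
    while (isdigit((unsigned char)*p)) p++;
    if (*p == '.') {                                          /* opt-fraction */
        p++;
        if (!isdigit((unsigned char)*p)) return false;
        while (isdigit((unsigned char)*p)) p++;
    }
    if (*p == 'E') {                                          /* opt-exponent */
        p++;
        if (*p == '+' || *p == '-') p++;
        if (!isdigit((unsigned char)*p)) return false;
        while (isdigit((unsigned char)*p)) p++;
    }
    return *p == '\0';                                        /* whole string must be consumed */
}

int main(void) {
    printf("%d %d %d\n", is_unsigned_num("42"),
           is_unsigned_num("3.14E+5"), is_unsigned_num("3."));   /* prints: 1 1 0 */
}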
Finite Automata
Recognizer
a program that takes a string x and answers "yes" if x is a sentence of that
language, and "no" otherwise
We call the recognizer of the tokens a finite automaton
Finite automaton
Deterministic Finite Automaton (DFA)
Non-deterministic Finite Automaton (NFA)
DFAs and NFAs both recognize regular sets
For a lexical analyzer, either a DFA or an NFA can be used
Which one to use?
deterministic – faster recognizer, but it may take more space
non-deterministic – slower, but it may take less space
Deterministic automata are widely used for lexical analyzers
Finite Automata
For lexical analysis
Algorithm 1: Regular Expression → NFA → DFA (two steps: first to NFA, then to DFA)
Algorithm 2: Regular Expression → DFA (directly convert a regular expression into a DFA)
Non-Deterministic Finite Automaton (NFA)
A non-deterministic finite automaton (NFA) is a mathematical model that consists of:
S - a set of states
Σ - a set of input symbols (alphabet)
move – a transition function that maps state-symbol pairs to sets of states
s0 - a start (initial) state
F – a set of accepting states (final states)
ε-transitions are allowed in NFAs
we can move from one state to another without consuming any symbol
An NFA accepts a string x if and only if there is a path from the start state to one of the accepting states such that the edge labels along this path spell out x
NFA (Example)
[Transition graph of the NFA: start → 0; state 0 has edges to itself on a and b, an a-edge 0 → 1, and a b-edge 1 → 2]
0 is the start state s0
{2} is the set of final states F
Σ = {a,b}      S = {0,1,2}
Transition function:
         a        b
   0   {0,1}     {0}
   1     _       {2}
   2     _        _
The language recognized by this NFA is (a|b)* a b
Deterministic Finite Automaton (DFA)
• Deterministic Finite Automaton (DFA) is a special form of a NFA
• no state has ε-transitions
• for each symbol a and state s, there is at most one edge labeled a leaving s
i.e. the transition function maps a state-symbol pair to a single state (not to a set of states)
[Transition graph of the DFA: start → 0;  a: 0 → 1, 1 → 1, 2 → 1;  b: 0 → 0, 1 → 2, 2 → 0;  state 2 is accepting]
The language recognized by this DFA is also (a|b)* a b
Implementing a DFA
Let us assume that the end of a string is marked with a special symbol (say eos). The algorithm for recognition will be as follows:
s ← s0                      { start from the initial state }
c ← nextchar                { get the next character from the input string }
while (c != eos) do         { do until the end of the string }
begin
    s ← move(s, c)          { transition function }
    c ← nextchar
end
if (s in F) then            { if s is an accepting state }
    return "yes"
else
    return "no"
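A concrete C version of this loop for the earlier (a|b)*ab DFA might look like the sketch below; the transition-table layout and symbol encoding are illustrative choices, not part of the slides:

#include <stdbool.h>
#include <stdio.h>

/* DFA for (a|b)*ab: states 0, 1, 2; column 0 = 'a', column 1 = 'b' */
static const int move_tab[3][2] = {
    /* a  b */
    {  1, 0 },   /* state 0 */
    {  1, 2 },   /* state 1 */
    {  1, 0 }    /* state 2 */
};

static bool dfa_accepts(const char *x) {
    int s = 0;                                    /* start from the initial state s0 */
    for (const char *c = x; *c != '\0'; c++) {    /* '\0' plays the role of eos      */
        if (*c != 'a' && *c != 'b') return false; /* symbol outside the alphabet     */
        s = move_tab[s][*c == 'b'];               /* transition function             */
    }
    return s == 2;                                /* accepting states F = {2}        */
}

int main(void) {
    printf("%d %d\n", dfa_accepts("abab"), dfa_accepts("aba"));   /* prints: 1 0 */
}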
Implementing a NFA
S ← ε-closure({s0})             { set of all states reachable from s0 by ε-transitions }
c ← nextchar
while (c != eos) do
begin
    S ← ε-closure(move(S,c))    { set of all states reachable from a state in S by a transition on c }
    c ← nextchar
end
if (S ∩ F != ∅) then            { if S contains an accepting state }
    return "yes"
else
    return "no"
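For the earlier example NFA (which happens to have no ε-transitions, so ε-closure is the identity there), this loop can be sketched in C with a bitmask for the state set S; the encoding and helper names below are my own:

#include <stdbool.h>
#include <stdio.h>

/* NFA of the earlier example ((a|b)*ab); the state set S is a bitmask of states 0..2 */
static unsigned nfa_move(unsigned S, char c) {
    unsigned next = 0;
    if (S & 1u << 0) next |= (c == 'a') ? (1u << 0 | 1u << 1) : 1u << 0;  /* move(0,a)={0,1}, move(0,b)={0} */
    if (S & 1u << 1) next |= (c == 'b') ? 1u << 2 : 0;                    /* move(1,b)={2}                  */
    return next;                                                          /* state 2 has no outgoing edges  */
}

static bool nfa_accepts(const char *x) {
    unsigned S = 1u << 0;                         /* ε-closure({s0}) = {0} here */
    for (; *x != '\0'; x++) {
        if (*x != 'a' && *x != 'b') return false;
        S = nfa_move(S, *x);                      /* S ← ε-closure(move(S,c))   */
    }
    return (S & 1u << 2) != 0;                    /* S ∩ F ≠ ∅, with F = {2}    */
}

int main(void) {
    printf("%d %d\n", nfa_accepts("bbab"), nfa_accepts("ba"));   /* prints: 1 0 */
}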
Converting A Regular Expression into A NFA
(Thompson's Construction)
One way to convert a regular expression into a NFA
Many others exist!
Thompson's Construction is simple and systematic
It guarantees that the resulting NFA will have exactly one final state, and one start state
Construction starts from simplest parts (alphabet symbols)
To create an NFA for a complex regular expression, the NFAs of its sub-expressions are combined to create its NFA
Thompson's Construction (cont.)
• Recognize the empty string ε:
  [NFA: start state i —ε→ final state f]
• Recognize a symbol a in the alphabet:
  [NFA: start state i —a→ final state f]
• If N(r1) and N(r2) are NFAs for regular expressions r1 and r2:
• For regular expression r1 | r2:
  [NFA for r1 | r2: a new start state i has ε-edges to the start states of N(r1) and N(r2); the final states of N(r1) and N(r2) have ε-edges to a new final state f]
Thompson's Construction (cont.)
• For regular expression r1 r2:
  [NFA for r1 r2: N(r1) and N(r2) are chained together; the final state of N(r2) becomes the final state of N(r1 r2)]
• For regular expression r*:
  [NFA for r*: a new start state i has ε-edges to the start of N(r) and to a new final state f; the final state of N(r) has ε-edges back to the start of N(r) and to f]
Thompson's Construction (Example – (a|b)* a)
[Diagrams: Thompson NFAs built step by step — for a, for b, for (a | b), for (a|b)*, and finally for (a|b)* a]
Converting a NFA into a DFA (subset construction)
put ε-closure({s0}) into DS (the set of DFA states) as an unmarked state
while (there is an unmarked state S1 in DS) do
begin
    mark S1
    for each input symbol a do
    begin
        S2 ← ε-closure(move(S1,a))
        if (S2 is not in DS) then
            add S2 into DS as an unmarked state
        transfunc[S1,a] ← S2
    end
end

ε-closure({s0}) is the set of all states reachable from s0 by ε-transitions
ε-closure(move(S1,a)) is the ε-closure of the set of states to which there is a transition on a from a state s in S1
a state S in DS is an accepting state of the DFA if a state in S is an accepting state of the NFA
the start state of the DFA is ε-closure({s0})
Computing ε-closure(T)
push all states of T onto stack;
initialize ε-closure(T) to T;
while (stack is not empty) {
    pop t, the top element, off the stack;
    for (each state u with an edge from t to u labeled ε)
        if (u is not in ε-closure(T)) {
            add u to ε-closure(T);
            push u onto stack;
        }
}
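The same worklist idea in C, as a hedged sketch: it assumes states are numbered 0..n-1 with n ≤ 32, a state set is an unsigned bitmask, and eps[s] is the bitmask of states reachable from s by a single ε-transition (all of these are my own encoding choices):

#include <stdio.h>

unsigned eps_closure(unsigned T, const unsigned eps[], int n) {
    int stack[32], top = 0;
    unsigned closure = T;                          /* initialize ε-closure(T) to T */
    for (int s = 0; s < n; s++)
        if (T & (1u << s)) stack[top++] = s;       /* push all states of T         */
    while (top > 0) {
        int t = stack[--top];                      /* pop t off the stack          */
        for (int u = 0; u < n; u++)
            if ((eps[t] & (1u << u)) && !(closure & (1u << u))) {
                closure |= 1u << u;                /* add u to ε-closure(T)        */
                stack[top++] = u;                  /* push u onto stack            */
            }
    }
    return closure;
}

int main(void) {
    unsigned eps[3] = { 1u << 1, 1u << 2, 0 };     /* tiny example: 0 —ε→ 1, 1 —ε→ 2 */
    printf("%#x\n", eps_closure(1u << 0, eps, 3)); /* prints 0x7, i.e. {0,1,2}       */
}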
Converting a NFA into a DFA (Example)
[NFA for (a|b)* a, built by Thompson's construction: 0 —ε→ 1, 7;  1 —ε→ 2, 4;  2 —a→ 3;  4 —b→ 5;  3 —ε→ 6;  5 —ε→ 6;  6 —ε→ 1, 7;  7 —a→ 8;  8 is the accepting state]
S0 = ε-closure({0}) = {0,1,2,4,7}                                      put S0 into DS as an unmarked state
mark S0
    ε-closure(move(S0,a)) = ε-closure({3,8}) = {1,2,3,4,6,7,8} = S1    put S1 into DS
    ε-closure(move(S0,b)) = ε-closure({5}) = {1,2,4,5,6,7} = S2        put S2 into DS
    transfunc[S0,a] ← S1    transfunc[S0,b] ← S2
mark S1
    ε-closure(move(S1,a)) = ε-closure({3,8}) = {1,2,3,4,6,7,8} = S1
    ε-closure(move(S1,b)) = ε-closure({5}) = {1,2,4,5,6,7} = S2
    transfunc[S1,a] ← S1    transfunc[S1,b] ← S2
mark S2
    ε-closure(move(S2,a)) = ε-closure({3,8}) = {1,2,3,4,6,7,8} = S1
    ε-closure(move(S2,b)) = ε-closure({5}) = {1,2,4,5,6,7} = S2
    transfunc[S2,a] ← S1    transfunc[S2,b] ← S2
Converting a NFA into a DFA (Example – cont.)
S0 is the start state of DFA since 0 is a member of S0={0,1,2,4,7}
S1 is an accepting state of the DFA since 8 is a member of S1 = {1,2,3,4,6,7,8}
[Resulting DFA: start state S0;  a: S0 → S1, S1 → S1, S2 → S1;  b: S0 → S2, S1 → S2, S2 → S2;  S1 is accepting]
Converting Regular Expressions Directly to DFAs
Regular expression can be directly converted into a DFA (without creating a
NFA first)
Augment the given regular expression by concatenating it with a special
symbol #
r → (r)#     the augmented regular expression
Create a syntax tree for this augmented regular expression
Syntax tree
Leaves: alphabet symbols (including # and the empty string ε) in the augmented regular expression
Intermediate nodes: operators
Number each alphabet symbol (including #) according to its position
Regular Expression → DFA (cont.)
(a|b)* a #     the augmented regular expression
Syntax tree of (a|b)* a #
[Syntax tree: the root is a concatenation node whose right child is the leaf # (position 4); its left child is a concatenation node with right child the leaf a (position 3) and left child a * node over (a | b), where leaf a has position 1 and leaf b has position 2]
• each symbol is numbered (these are the positions)
• each symbol is at a leaf
• inner nodes are operators
followpos
Define the function followpos for the positions (positions
assigned to leaves)
followpos(i) -- the set of positions which can follow position i in the strings generated by the augmented regular expression
For example, ( a | b) * a #
1 2 3 4
followpos(1) = {1,2,3}
followpos(2) = {1,2,3}
followpos(3) = {4}
followpos(4) = {}
followpos is defined only for leaves; it is not defined for inner nodes.
firstpos, lastpos, nullable
To evaluate followpos, we need three more functions to be defined for the
nodes (not just for leaves) of the syntax tree
firstpos(n) -- set of the positions of the first symbols of strings
generated by the sub-expression rooted by n
lastpos(n) -- set of the positions of the last symbols of strings
generated by the sub-expression rooted by n
nullable(n) -- true if the empty string is a member of strings
generated by the sub-expression rooted by n
false otherwise
How to evaluate firstpos, lastpos, nullable?
n = leaf labeled ε:                 nullable(n) = true;    firstpos(n) = ∅;      lastpos(n) = ∅
n = leaf labeled with position i:   nullable(n) = false;   firstpos(n) = {i};    lastpos(n) = {i}
n = c1 | c2:                        nullable(n) = nullable(c1) or nullable(c2)
                                    firstpos(n) = firstpos(c1) ∪ firstpos(c2)
                                    lastpos(n)  = lastpos(c1) ∪ lastpos(c2)
n = c1 c2 (concatenation):          nullable(n) = nullable(c1) and nullable(c2)
                                    firstpos(n) = if nullable(c1) then firstpos(c1) ∪ firstpos(c2) else firstpos(c1)
                                    lastpos(n)  = if nullable(c2) then lastpos(c1) ∪ lastpos(c2) else lastpos(c2)
n = c1*:                            nullable(n) = true;    firstpos(n) = firstpos(c1);    lastpos(n) = lastpos(c1)
How to evaluate followpos?
Two rules define the function followpos:
1. If n is a concatenation node with left child c1 and right child c2, and i is a position in lastpos(c1), then all positions in firstpos(c2) are in followpos(i).
2. If n is a star-node, and i is a position in lastpos(n), then all
positions in firstpos(n) are in followpos(i).
If firstpos and lastpos have been computed for each node, followpos
of each position can be computed by making one depth-first traversal of the syntax tree
Example -- ( a | b) * a #
[Annotated syntax tree (firstpos written to the left of each node, lastpos to the right):
  root concatenation: {1,2,3} • {4}
    left concatenation: {1,2,3} • {3}        right leaf # (position 4): {4} # {4}
      left *: {1,2} * {1,2}                  right leaf a (position 3): {3} a {3}
        child |: {1,2} | {1,2}
          leaf a (position 1): {1} a {1}     leaf b (position 2): {2} b {2} ]
Then we can calculate followpos:
followpos(1) = {1,2,3}
followpos(2) = {1,2,3}
followpos(3) = {4}
followpos(4) = {}
• After we calculate the follow positions, we are ready to create the DFA for the regular expression
Algorithm (RE → DFA)
Create the syntax tree of (r)#
Calculate the functions: followpos, firstpos, lastpos, nullable
Put firstpos(root) into the states of the DFA as an unmarked state
while (there is an unmarked state S in the states of the DFA) do
    mark S
    for each input symbol a do
        let s1,...,sn be the positions in S whose symbol is a
        S' ← followpos(s1) ∪ ... ∪ followpos(sn)
        move(S, a) ← S'
        if (S' is not empty and not in the states of the DFA)
            put S' into the states of the DFA as an unmarked state
the start state of the DFA is firstpos(root)
the accepting states of the DFA are all states containing the position of #
Example -- ( a | b) * a #
1 2 3 4
followpos(1)={1,2,3} followpos(2)={1,2,3} followpos(3)={4} followpos(4)={}
S1 = firstpos(root) = {1,2,3}
mark S1
    a: followpos(1) ∪ followpos(3) = {1,2,3,4} = S2     move(S1,a) = S2
    b: followpos(2) = {1,2,3} = S1                      move(S1,b) = S1
mark S2
    a: followpos(1) ∪ followpos(3) = {1,2,3,4} = S2     move(S2,a) = S2
    b: followpos(2) = {1,2,3} = S1                      move(S2,b) = S1
start state: S1
accepting states: {S2}
[DFA: S1 —a→ S2, S1 —b→ S1, S2 —a→ S2, S2 —b→ S1]
Example -- ( a | ε ) b c* #      (positions: a = 1, b = 2, c = 3, # = 4)
followpos(1)={2} followpos(2)={3,4} followpos(3)={3,4} followpos(4)={}
S1=firstpos(root)={1,2}
mark S1
a: followpos(1)={2}=S2 move(S1,a)=S2
b: followpos(2)={3,4}=S3 move(S1,b)=S3
mark S2
b: followpos(2)={3,4}=S3 move(S2,b)=S3
mark S3
    c: followpos(3) = {3,4} = S3     move(S3,c) = S3
start state: S1
accepting states: {S3}
[DFA: S1 —a→ S2, S1 —b→ S3, S2 —b→ S3, S3 —c→ S3]
Minimizing Number of States of a DFA
partition the set of states into two groups:
G1 : set of accepting states
G2 : set of non-accepting states
For each new group G
partition G into subgroups such that states s1 and s2 are in the same group iff for all input
symbols a, states s1 and s2 have transitions to states in the same group
Start state: the group containing the start state of the original DFA
Accepting states: the groups containing the accepting states of the original
DFA
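The same partitioning can also be coded with the equivalent pair-marking (table-filling) formulation instead of explicit group splitting; the sketch below does that for a small made-up DFA (the table layout and the example transitions are mine, not the example on the next slide):

#include <stdbool.h>
#include <stdio.h>

#define N 4    /* number of DFA states         */
#define K 2    /* input symbols: 0 = a, 1 = b  */

/* a small hypothetical DFA in which states 1 and 2 behave identically */
static const int  delta[N][K]  = { {1, 2}, {3, 3}, {3, 3}, {3, 3} };
static const bool accepting[N] = { false, false, false, true };

static bool distinct[N][N];    /* distinct[s][t] == true  =>  s and t are not equivalent */

static void mark_pairs(void) {
    for (int s = 0; s < N; s++)                       /* initial split: accepting vs. non-accepting */
        for (int t = 0; t < N; t++)
            distinct[s][t] = (accepting[s] != accepting[t]);
    bool changed = true;
    while (changed) {                                 /* refine until nothing changes */
        changed = false;
        for (int s = 0; s < N; s++)
            for (int t = s + 1; t < N; t++) {
                if (distinct[s][t]) continue;
                for (int a = 0; a < K; a++)
                    if (distinct[delta[s][a]][delta[t][a]]) {   /* successors already distinguished */
                        distinct[s][t] = distinct[t][s] = true;
                        changed = true;
                        break;
                    }
            }
    }
}

int main(void) {
    mark_pairs();
    for (int s = 0; s < N; s++)
        for (int t = s + 1; t < N; t++)
            if (!distinct[s][t]) printf("states %d and %d can be merged\n", s, t);   /* 1 and 2 */
}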
Minimizing DFA - Example
[Transition graph of the DFA being minimized: states 1, 2, 3; start state 1; accepting state 2]
G1 = {2}
G2 = {1,3}
G2 cannot be partitioned because
move(1,a) = 2     move(1,b) = 3
move(3,a) = 2     move(3,b) = 3
So, the minimized DFA (with the minimum number of states):
[States {1,3} and {2}: {1,3} —a→ {2}, {1,3} —b→ {1,3}, {2} —a→ {2}, {2} —b→ {1,3}; {2} is accepting]
Minimizing DFA – Another Example
[DFA: states 1, 2, 3, 4; start state 1; accepting state 4]
Transitions:     a         b
            1 -> 2     1 -> 3
            2 -> 2     2 -> 3
            3 -> 4     3 -> 3
Groups: {1,2,3} {4}
{1,2,3} is split into {1,2} and {3} (on a, state 3 moves to the group {4} while states 1 and 2 stay in {1,2,3});
then no more partitioning is possible.
So, the minimized DFA:
[States {1,2}, {3}, {4}: {1,2} —a→ {1,2}, {1,2} —b→ {3}, {3} —a→ {4}, {3} —b→ {3}; {4} keeps its original transitions]
An Example
Grammar Fragment (Pascal)
stmt → if expr then stmt
     | if expr then stmt else stmt
     | ε
expr → term relop term
     | term
term → id | num
Related Regular Definitions
if    → if
then  → then
else  → else
relop → < | <= | = | <> | > | >=
id    → letter ( letter | digit )*
num   → digit+ ( . digit+ )? ( E (+|-)? digit+ )?
delim → blank | tab | newline
ws    → delim+
Tokens and Attributes
Regular Expression    Token    Attribute Value
ws                    -        -
if                    if       -
then                  then     -
else                  else     -
id                    id       pointer to entry
num                   num      pointer to entry
<                     relop    LT
<=                    relop    LE
=                     relop    EQ
<>                    relop    NE
>                     relop    GT
>=                    relop    GE
Transition Diagram for “relop”
[Transition diagram for relop:
  start 0 —<→ 1      0 —=→ 5      0 —>→ 6
  1 —=→ 2               return(relop, LE)
  1 —>→ 3               return(relop, NE)
  1 —other→ 4 *         return(relop, LT)
  5                     return(relop, EQ)
  6 —=→ 7               return(relop, GE)
  6 —other→ 8 *         return(relop, GT)
  (a state marked * retracts the lookahead character, which is not part of the lexeme)]
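The diagram above can be hand-coded directly; the following is a minimal sketch driven by a string, where the input-buffer scheme (pos, nextchar, retract) and the attribute names are illustrative assumptions, not definitions from the slides:

#include <stdio.h>

typedef enum { LT, LE, EQ, NE, GT, GE, NOT_RELOP } RelopAttr;

static const char *input;
static int pos;

static int  nextchar(void) { return (unsigned char)input[pos++]; }   /* '\0' acts as eos    */
static void retract(void)  { pos--; }                                /* for states marked * */

static RelopAttr scan_relop(void) {
    int c = nextchar();                          /* state 0 */
    if (c == '<') {                              /* state 1 */
        c = nextchar();
        if (c == '=') return LE;                 /* state 2 */
        if (c == '>') return NE;                 /* state 3 */
        retract();    return LT;                 /* state 4 (*) */
    }
    if (c == '=') return EQ;                     /* state 5 */
    if (c == '>') {                              /* state 6 */
        c = nextchar();
        if (c == '=') return GE;                 /* state 7 */
        retract();    return GT;                 /* state 8 (*) */
    }
    retract();
    return NOT_RELOP;
}

int main(void) {
    input = "<=";
    printf("%d\n", scan_relop() == LE);          /* prints: 1 */
}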
Identifiers and Keywords
Share a transition diagram
After reaching the accepting state, code determines whether the lexeme is a keyword or an identifier
Easier than encoding exceptions in diagram
Simple technique is to appropriately initialize symbol table with keywords
[Transition diagram: start 9 —letter→ 10; 10 loops on letter or digit; 10 —other→ 11 *     return(gettoken(), install_id())]
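A minimal sketch of "initialize the symbol table with keywords": the table layout and the bodies of install_id()/gettoken() below are illustrative assumptions built around the names used in the diagram, not the slides' definitions:

#include <stdio.h>
#include <string.h>

enum { ID = 256, IF, THEN, ELSE };                 /* hypothetical token codes */

static struct { const char *lexeme; int token; } symtab[256];
static int nsyms;

static int install(const char *lex, int tok) {
    for (int i = 0; i < nsyms; i++)
        if (strcmp(symtab[i].lexeme, lex) == 0) return i;   /* already present */
    symtab[nsyms].lexeme = lex;
    symtab[nsyms].token  = tok;
    return nsyms++;
}

static void init_keywords(void) {                  /* done once, before scanning starts */
    install("if", IF);
    install("then", THEN);
    install("else", ELSE);
}

static int install_id(const char *lexeme) { return install(lexeme, ID); }
static int gettoken(int p)                { return symtab[p].token;     }

int main(void) {
    init_keywords();
    int p1 = install_id("then");                   /* finds the keyword entry */
    int p2 = install_id("newval");                 /* creates an identifier   */
    printf("%d %d\n", gettoken(p1) == THEN, gettoken(p2) == ID);   /* prints: 1 1 */
}

In a real scanner the lexeme would be copied into the table rather than stored as a pointer into the input buffer.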
Numbers
[Three transition diagrams:
  states 12–19 recognize digit+ . digit+ E (+|-)? digit+   (number with fraction and exponent)
  states 20–24 recognize digit+ . digit+                    (number with fraction)
  states 25–27 recognize digit+                             (integer)
  each diagram ends in a state marked *, which retracts the lookahead character and does return(get_token(), install_num())]
Order of Transition Diagrams
Transition diagrams are tested in order
Diagrams with low-numbered start states are tried before diagrams with high-numbered start states
Order influences efficiency of lexical analyzer
Trying Transition Diagrams
int next_td(void) {
    switch (start) {
        case 0:  start = 9;  break;
        case 9:  start = 12; break;
        case 12: start = 20; break;
        case 20: start = 25; break;
        case 25: recover();  break;
        default: error("invalid start state");
    }
    /* Possibly additional actions here */
    return start;
}
Finding the Next Token
token nexttoken(void) {
    while (1) {
        switch (state) {
            case 0:
                c = nextchar();
                if (c == ' ' || c == '\t' || c == '\n') {
                    state = 0;
                    lexeme_beginning++;
                }
                else if (c == '<') state = 1;
                else if (c == '=') state = 5;
                else if (c == '>') state = 6;
                else state = next_td();
                break;
            … /* 27 other cases here */
The End of a Token
token nexttoken(void) {
    while (1) {
        switch (state) {
            … /* First 19 cases */
            case 19:
                retract();
                install_num();
                return(NUM);
                break;
            … /* Final 8 cases */
Some Other Issues in Lexical Analyzer
Lexical analyzer has to recognize the longest possible string
Ex: identifier newval -- n ne new newv newva newval
What is the end of a token? Is there any character which marks the end of a
token?
normally not defined
Not an issue if the number of characters in a token is fixed
But <, <= or <> (in Pascal) – the length is not fixed
End of an identifier: the characters that cannot appear in an identifier can mark the end of the token
We may need a lookahead
In Prolog:
p :- X is 1.        p :- X is 1.5.
The dot followed by a white space character can mark the end of a number.
But if that is not the case, the dot must be treated as a part of the number.
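A small sketch of this kind of lookahead in C: the scanner reads one character past the dot and pushes characters back when the dot turns out not to belong to the number. The pushback buffer and the helper names are illustrative assumptions, not from the slides:

#include <ctype.h>
#include <stdio.h>

static const char *src = "1. ";                 /* Prolog-like input: the dot ends the number */
static int pos;
static int pushback[2], npushed;                /* tiny pushback stack for lookahead */

static int getch(void) {
    if (npushed > 0) return pushback[--npushed];
    return src[pos] ? (unsigned char)src[pos++] : '\0';
}
static void ungetch(int c) { pushback[npushed++] = c; }

/* scan digit+ ( . digit+ )? and return the number of characters in the lexeme;
   a '.' not followed by a digit is pushed back and left for the next token */
static int scan_number_length(void) {
    int len = 0, c = getch();
    while (isdigit(c)) { len++; c = getch(); }  /* digit+ */
    if (c == '.') {
        int next = getch();                     /* one character of lookahead past the dot */
        if (isdigit(next)) {                    /* a fraction follows: the dot is part of the number */
            len++;
            c = next;
            while (isdigit(c)) { len++; c = getch(); }
        } else {                                /* the dot ends the number */
            ungetch(next);
            c = '.';                            /* fall through and push the dot back too */
        }
    }
    ungetch(c);                                 /* c is not part of the number */
    return len;
}

int main(void) {
    printf("%d\n", scan_number_length());       /* prints 1: only "1" belongs to the number */
}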
Some Other Issues in Lexical Analyzer (cont.)
Skipping comments
Normally we don’t return a comment as a token
We skip a comment, and return the next token (which is not a comment) to the parser
So, the comments are only processed by the lexical analyzer, and don’t complicate the syntax
of the language
Symbol table interface
symbol table holds information about tokens (at least the lexemes of identifiers)
how to implement the symbol table, and what kind of operations?
hash table – open addressing, chaining
putting into the hash table, finding the position of a token from its lexeme
Positions of the tokens in the file (for error handling)
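One possible shape for such an interface is a small chained hash table; everything below (sizes, hash function, entry fields) is an illustrative sketch rather than a prescribed design:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 211

typedef struct Entry {
    char         *lexeme;     /* the identifier's lexeme          */
    int           token;      /* e.g. an ID or keyword token code */
    struct Entry *next;       /* chaining resolves collisions     */
} Entry;

static Entry *bucket[NBUCKETS];

static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s) h = h * 31 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* finding the position of a token from its lexeme */
static Entry *lookup(const char *lexeme) {
    for (Entry *e = bucket[hash(lexeme)]; e; e = e->next)
        if (strcmp(e->lexeme, lexeme) == 0) return e;
    return NULL;
}

/* putting into the hash table (no effect if the lexeme is already there) */
static Entry *insert(const char *lexeme, int token) {
    Entry *e = lookup(lexeme);
    if (e) return e;
    unsigned h = hash(lexeme);
    e = malloc(sizeof *e);
    e->lexeme = malloc(strlen(lexeme) + 1);
    strcpy(e->lexeme, lexeme);
    e->token = token;
    e->next  = bucket[h];
    bucket[h] = e;
    return e;
}

int main(void) {
    insert("newval", 1 /* a hypothetical ID token code */);
    printf("%s\n", lookup("newval") ? "found" : "missing");   /* prints: found */
}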