
Inverted Index Construction

Adapted from Lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)


Unstructured data in 1650

- Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
- One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out plays containing Calpurnia.
  - Slow (for large corpora)
  - Other operations (e.g., find the word Romans near countrymen) not feasible

Term-document incidence

            Antony &   Julius   The
            Cleopatra  Caesar   Tempest  Hamlet  Othello  Macbeth
Antony          1        1        0        0       0        1
Brutus          1        1        0        1       0        0
Caesar          1        1        0        1       1        1
Calpurnia       0        1        0        0       0        0
Cleopatra       1        0        0        0       0        0
mercy           1        0        1        1       1        1
worser          1        0        1        1       1        0

1 if the play contains the word, 0 otherwise.

Query: Brutus AND Caesar but NOT Calpurnia

Incidence vectors

- So we have a 0/1 vector for each term.
- To answer the query: take the vectors for Brutus, Caesar, and Calpurnia (complemented) and bitwise AND them:
  110100 AND 110111 AND 101111 = 100100.
- A code sketch follows below.
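To make the bitwise-AND idea concrete, here is a minimal Python sketch. The 0/1 vectors come from the incidence matrix above; the variable names are mine, not from the slides.

```python
# Boolean retrieval over term incidence vectors, packed as Python ints.
plays = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

vec = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}

# Brutus AND Caesar AND NOT Calpurnia; the mask keeps only the 6 play bits.
answer = vec["Brutus"] & vec["Caesar"] & ~vec["Calpurnia"] & 0b111111

# Bit i (counted from the left) corresponds to plays[i].
matches = [p for i, p in enumerate(plays) if answer >> (5 - i) & 1]
print(matches)  # ['Antony and Cleopatra', 'Hamlet']
```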

Answers to query

Antony and Cleopatra, Act III, Scene ii:
  Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus,
  When Antony found Julius Caesar dead,
  He cried almost to roaring; and he wept
  When at Philippi he found Brutus slain.

Hamlet, Act III, Scene ii:
  Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.

Basic assumptions of Information Retrieval

- Collection: a fixed set of documents
- Goal: retrieve documents with information that is relevant to the user's information need and helps the user complete a task

The classic search model

- Task: get rid of mice in a politically correct way
- (mis-conception) Info need: info about removing mice without killing them
- (mis-translation) Verbal form: "How do I trap mice alive?"
- (mis-formulation) Query: mouse trap
- The query goes to the search engine, which returns results from the corpus; query refinement loops back from the results to the query.

How good are the retrieved docs?

- Precision: fraction of retrieved docs that are relevant to the user's information need
- Recall: fraction of relevant docs in the collection that are retrieved
- More precise definitions and measurements to follow in later lectures

Bigger corpora

- Consider N = 1M documents, each with about 1K terms.
- Avg 6 bytes/term including spaces/punctuation: 6GB of data in the documents.
- Say there are m = 500K distinct terms among these.

Can't build the matrix

- A 500K x 1M matrix has half a trillion 0s and 1s.
- But it has no more than one billion 1s, so the matrix is extremely sparse.
  - Why? Each of the 1M documents contains at most ~1K distinct terms, so at most 10^9 cells can be 1.
- What's a better representation?
  - We only record the 1 positions.

Inverted index

- For each term T, we must store a list of all documents that contain T.
- Do we use an array or a list for this?

Brutus    -> 2 4 8 16 32 64 128
Calpurnia -> 1 2 3 5 8 13 21 34
Caesar    -> 13 16

- What happens if the word Caesar is added to document 14?

Inverted index

- Linked lists generally preferred to arrays
  + Dynamic space allocation
  + Insertion of terms into documents easy
  - Space overhead of pointers

Dictionary     Postings lists (each entry is a posting; sorted by docID, more later on why)
Brutus    -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
Calpurnia -> 1 -> 2 -> 3 -> 5 -> 8 -> 13 -> 21 -> 34
Caesar    -> 13 -> 16

Inverted index construction

Documents to be indexed: Friends, Romans, countrymen.
  -> Tokenizer ->
Token stream: Friends Romans Countrymen
  -> Linguistic modules (more on these later) ->
Modified tokens: friend roman countryman
  -> Indexer ->
Inverted index:
  friend     -> 2 -> 4
  roman      -> 1 -> 2
  countryman -> 13 -> 16

Indexer steps

- Generate a sequence of (modified token, document ID) pairs.

Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.

Resulting pairs:
(I,1) (did,1) (enact,1) (julius,1) (caesar,1) (I,1) (was,1) (killed,1) (i',1) (the,1)
(capitol,1) (brutus,1) (killed,1) (me,1) (so,2) (let,2) (it,2) (be,2) (with,2) (caesar,2)
(the,2) (noble,2) (brutus,2) (hath,2) (told,2) (you,2) (caesar,2) (was,2) (ambitious,2)

Sort by terms (then by docID). This is the core indexing step:

(ambitious,2) (be,2) (brutus,1) (brutus,2) (capitol,1) (caesar,1) (caesar,2) (caesar,2)
(did,1) (enact,1) (hath,2) (I,1) (I,1) (i',1) (it,2) (julius,1) (killed,1) (killed,1)
(let,2) (me,1) (noble,2) (so,2) (the,1) (the,2) (told,2) (you,2) (was,1) (was,2) (with,2)

Multiple term entries in a single document are merged, and frequency information is added (why frequency? will discuss later):

(term, doc, freq):
(ambitious,2,1) (be,2,1) (brutus,1,1) (brutus,2,1) (capitol,1,1) (caesar,1,1) (caesar,2,2)
(did,1,1) (enact,1,1) (hath,2,1) (I,1,2) (i',1,1) (it,2,1) (julius,1,1) (killed,1,2)
(let,2,1) (me,1,1) (noble,2,1) (so,2,1) (the,1,1) (the,2,1) (told,2,1) (you,2,1)
(was,1,1) (was,2,1) (with,2,1)

The result is split into a Dictionary file and a Postings file. The dictionary stores, per term, the number of docs and the collection frequency, plus a pointer into the postings file; the postings file stores (doc, freq) pairs. A toy indexer covering these steps follows below.

Term       #docs  Coll freq   Postings (doc, freq)
ambitious    1       1        (2,1)
be           1       1        (2,1)
brutus       2       2        (1,1) (2,1)
capitol      1       1        (1,1)
caesar       2       3        (1,1) (2,2)
did          1       1        (1,1)
enact        1       1        (1,1)
hath         1       1        (2,1)
I            1       2        (1,2)
i'           1       1        (1,1)
it           1       1        (2,1)
julius       1       1        (1,1)
killed       1       2        (1,2)
let          1       1        (2,1)
me           1       1        (1,1)
noble        1       1        (2,1)
so           1       1        (2,1)
the          2       2        (1,1) (2,1)
told         1       1        (2,1)
you          1       1        (2,1)
was          2       2        (1,1) (2,1)
with         1       1        (2,1)
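The whole sequence (pair generation, sort, merge with frequencies) fits in a few lines of Python. This is a toy sketch of the steps above, not the lecture's actual implementation; the tokenizer is deliberately crude, and unlike the slide's table it also lowercases "I".

```python
from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious",
}

# Step 1: sequence of (modified token, docID) pairs (trivial tokenizer).
pairs = [(tok.strip(".;,").lower(), doc_id)
         for doc_id, text in docs.items()
         for tok in text.split()]

# Step 2: sort by term, then docID -- the core indexing step.
pairs.sort()

# Step 3: merge duplicates, recording term frequency per (term, doc).
index = defaultdict(list)          # term -> postings list of (docID, freq)
for term, doc_id in pairs:
    postings = index[term]
    if postings and postings[-1][0] == doc_id:
        postings[-1] = (doc_id, postings[-1][1] + 1)
    else:
        postings.append((doc_id, 1))

print(index["caesar"])  # [(1, 1), (2, 2)]
```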

Where do we pay in storage?

- Terms: the dictionary entries (term, #docs, collection frequency), as in the table above
- Pointers: the postings entries
- Will quantify the storage later.

Query Processing

What? How?

Query processing: AND

- Consider processing the query: Brutus AND Caesar
  - Locate Brutus in the dictionary; retrieve its postings.
  - Locate Caesar in the dictionary; retrieve its postings.
  - Merge the two postings:

Brutus -> 2 4 8 16 32 64 128
Caesar -> 1 2 3 5 8 13 21 34

The merge

- Walk through the two postings simultaneously, in time linear in the total number of postings entries:

Brutus -> 2 4 8 16 32 64 128
Caesar -> 1 2 3 5 8 13 21 34

- If the list lengths are x and y, the merge takes O(x+y) operations.
- Crucial: postings sorted by docID. (A sketch follows below.)
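A minimal Python sketch of the linear merge, assuming docIDs are in ascending order as the slide requires (`intersect` is my name for it, not the lecture's):

```python
def intersect(p1, p2):
    """Intersect two postings lists sorted by docID in O(len(p1)+len(p2))."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1       # advance the list with the smaller docID
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))  # [2, 8]
```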

Boolean queries: Exact match

- Boolean queries use AND, OR and NOT to join query terms
  - Views each document as a set of words
  - Is precise: a document matches, or it does not.
- Primary commercial retrieval tool for 3 decades.
- Professional searchers (e.g., lawyers) still like Boolean queries: you know exactly what you're getting.

Example: WestLaw (http://www.westlaw.com/)

- Largest commercial (paying subscribers) legal search service (started 1975; ranking added 1992)
- Tens of terabytes of data; 700,000 users
- Majority of users still use Boolean queries
- Example query:
  - What is the statute of limitations in cases involving the federal tort claims act?
  - LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
  - /3 = within 3 words, /S = in same sentence

Example: WestLaw (http://www.westlaw.com/)

- Another example query:
  - Requirements for disabled people to be able to access a workplace
  - disabl! /p access! /s work-site work-place (employment /3 place)
- Note that SPACE is disjunction, not conjunction!
- Long, precise queries; proximity operators; incrementally developed; not like web search
- Professional searchers often like Boolean search: precision, transparency, and control
- But that doesn't mean they actually work better ...

Example: Using Search Connectors and Commands (http://help.lexisnexis.com/)

Sample commands:
- AND/OR connector
- ATLEAST n command
- COMPANY command
- HEADLINE command
- LENGTH command
- NOCAPS command
- PRE/n (preceded by n words) connector
- W/n (within n words) connector
- W/p (within paragraph) connector
- W/s (within sentence) connector

Example queries:
- election AND opinion poll
- ATLEAST5(free)
- COMPANY(Tivoli NOT IBM)
- gore vidal AND LENGTH(>1000)
- NOCAPS(aid)
- digital PRE/3 television
- airport W/5 noise W/5 nuisance
- olympic W/s bid

Query optimization

- Consider a query that is an AND of t terms.
- For each of the t terms, get its postings, then AND them together.
- What is the best order for query processing?

Brutus    -> 2 4 8 16 32 64 128
Calpurnia -> 1 2 3 5 8 16 21 34
Caesar    -> 13 16

Query: Brutus AND Calpurnia AND Caesar

Query optimization example

- Process terms in order of increasing document frequency:
  - start with the smallest set, then keep cutting further.
  - (This is why we kept the frequency in the dictionary.)

Brutus    -> 2 4 8 16 32 64 128
Calpurnia -> 1 2 3 5 8 16 21 34
Caesar    -> 13 16

- Execute the query as (Caesar AND Brutus) AND Calpurnia. (A sketch follows below.)
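A sketch of frequency-ordered AND processing in Python. It approximates document frequency by postings-list length and reuses the linear merge from earlier; the function names are mine.

```python
def intersect(p1, p2):
    """Linear merge of two sorted postings lists (as sketched earlier)."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

def intersect_many(terms, index):
    """AND together the postings of all terms, rarest term first."""
    terms = sorted(terms, key=lambda t: len(index[t]))  # increasing doc freq
    result = index[terms[0]]
    for t in terms[1:]:
        if not result:        # early exit once the intersection is empty
            break
        result = intersect(result, index[t])
    return result

index = {
    "brutus":    [2, 4, 8, 16, 32, 64, 128],
    "calpurnia": [1, 2, 3, 5, 8, 16, 21, 34],
    "caesar":    [13, 16],
}
# Processed as (caesar AND brutus) AND calpurnia:
print(intersect_many(["brutus", "calpurnia", "caesar"], index))  # [16]
```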

More general optimization

- e.g., (madding OR crowd) AND (ignoble OR strife)
- Get frequencies for all terms.
- Estimate the size of each OR by the sum of its frequencies (a conservative upper bound).
- Process in increasing order of OR sizes.

Space Requirements

- The space required for the vocabulary is rather small. According to Heaps' law, the vocabulary grows as O(n^beta), where beta is a constant between 0.4 and 0.6 in practice. (A quick numeric illustration follows below.)

Size of inverted file as a percentage of text (all words / non-stop words):

Index                   Small (1Mb)   Medium (200Mb)   Large (2Gb)
Addressing words        45% / 73%     36% / 64%        35% / 63%
Addressing documents    19% / 26%     18% / 32%        26% / 47%
Addressing 256 blocks   18% / 25%     1.7% / 2.4%      0.5% / 0.7%
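A quick numeric illustration of Heaps' law, V(n) = k * n^beta. The constants k = 44 and beta = 0.49 are the values fitted to the Reuters-RCV1 collection in the IIR book, used here purely for illustration.

```python
# Heaps' law: vocabulary size V(n) = k * n**beta for n tokens of text.
k, beta = 44, 0.49   # illustrative values (Reuters-RCV1 fit from IIR)

for n in (10**6, 10**8, 10**9):
    print(f"{n:>13,} tokens -> ~{int(k * n**beta):,} distinct terms")
# The sublinear exponent is why the vocabulary stays "rather small".
```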

Space Requirements

- To reduce space requirements, a technique called block addressing can be used.
  - Advantages:
    - the number of pointers is smaller than the number of positions
    - all the occurrences of a word inside a single block are collapsed to one reference
  - Disadvantage:
    - online (dynamic) search over the qualifying blocks is necessary if exact positions are required

What's ahead in IR? Beyond term search

- What about phrases? e.g., "Stanford University"
- Proximity: Find Gates NEAR Microsoft.
  - Need the index to capture position information in docs.
- Zones in documents: Find documents with (author = Ullman) AND (text contains automata).

Other Indexing Techniques

- Even though inverted files are the method of choice, even in the face of phrase and proximity queries, the following approaches were also developed:
  - Suffix arrays
  - Signature files

Dictionary and Postings; Query Processing

Adapted from Lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)


This lecture's agenda

- Difficulties with tokenization
- Improving efficiency using enhanced data structures and algorithms: skip pointers
- Phrasal queries
- Generalizing indexing structures

Recall basic indexing pipeline

Documents to be indexed: Friends, Romans, countrymen.
  -> Tokenizer ->
Token stream: Friends Romans Countrymen
  -> Linguistic modules ->
Modified tokens: friend roman countryman
  -> Indexer ->
Inverted index:
  friend     -> 2 -> 4
  roman      -> 1 -> 2
  countryman -> 13 -> 16

Parsing a document

- What format is it in? (pdf/word/excel/html?)
- What language is it in?
- What character set is in use?
- Each of these is a classification problem. But these tasks are often done heuristically.

Complications: Format/language

- Documents being indexed can include docs from many different languages
  - A single index may have to contain terms of several languages.
- Sometimes a document or its components can contain multiple languages/formats
  - e.g., a French email with a German pdf attachment.
- What is a unit document?
  - A file? An email? (Perhaps one of many in an mbox.)
  - An email with 5 attachments?
  - A group of files (PPT or LaTeX in HTML)?

Tokenization


Tokenization

- Input: "Friends, Romans and Countrymen"
- Output: tokens
  - Friends
  - Romans
  - Countrymen
- Each such token is now a candidate for an index entry, after further processing
- But what are valid tokens to emit?

Tokenization

- Issues in tokenization:
  - Finland's capital -> Finland? Finlands? Finland's?
  - Hewlett-Packard -> Hewlett and Packard as two tokens?
    - State-of-the-art: break up the hyphenated sequence.
    - co-education?
    - It's effective to get the user to put in possible hyphens.
  - San Francisco: one token or two? How do you decide it is one token?

Numbers

- 3/12/91, Mar. 12, 1991
- 55 B.C.
- B-52
- My PGP key is 324a3df234cb23e
- 100.2.86.144
- Often, don't index numbers as text.
  - But they are often very useful: think about things like looking up error codes/stacktraces on the web. (One answer is using n-grams.)
- Will often index meta-data separately: creation date, format, etc.

Tokenization: Language issues

- French: L'ensemble, one token or two?
  - L? L'? Le?
  - Want l'ensemble to match with un ensemble
- German noun compounds are not segmented:
  - Lebensversicherungsgesellschaftsangestellter
  - "life insurance company employee"

Tokenization: Language issues

- Chinese and Japanese have no spaces between words:
  - Cannot always guarantee a unique tokenization
- Further complicated in Japanese, with multiple alphabets intermingled: Katakana, Hiragana, Kanji, Romaji
  - Dates/amounts appear in multiple formats across these scripts (the slide's example mixes $500K and a 6,000-unit amount in one sentence)
  - The end-user can express a query entirely in hiragana!

Tokenization: Language issues

- Arabic (or Hebrew) is basically written right to left, but with certain items like numbers written left to right
- Words are separated, but letter forms within a word form complex ligatures
- Example (gloss of an Arabic sentence): "Algeria achieved its independence in 1962 after 132 years of French occupation." The numbers 1962 and 132 read left to right inside the right-to-left text.
- With Unicode, the surface presentation is complex, but the stored form is straightforward

Normalization

- Need to normalize terms in indexed text as well as query terms into the same form
  - We want to match U.S.A. and USA
- We most commonly implicitly define equivalence classes of terms
  - e.g., by deleting periods in a term (a one-line sketch follows below)
- Alternative is to do asymmetric expansion:
  - Enter: window   Search: window, windows
  - Enter: windows  Search: Windows, windows
  - Enter: Windows  Search: Windows
  - Potentially more powerful, but less efficient
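A minimal sketch of the implicit equivalence-class approach. The normalization rule (deleting periods) is the one from the slide; the function name and the added lowercasing are mine.

```python
def normalize(term: str) -> str:
    """Map a term to its equivalence-class representative."""
    return term.replace(".", "").lower()

# U.S.A. and USA now land in the same dictionary entry.
assert normalize("U.S.A.") == normalize("USA") == "usa"
```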

Normalization: Other languages

- Accents: résumé vs. resume.
- Most important criterion: how are your users likely to write their queries for these words?
- Even in languages that have accents, users often may not type them
- German: Tuebingen vs. Tübingen
  - Should be equivalent

Normalization: Other languages

- Need to normalize indexed text as well as query terms into the same form
  - e.g., the date 7/30 written in Japanese characters vs. 7/30
- Character-level alphabet detection and conversion
  - Tokenization is not separable from this.
  - Sometimes ambiguous: "Morgen will ich in MIT ..." Is this the German word "mit"?

Case folding

- Reduce all letters to lower case
  - exception: upper case in mid-sentence?
    - e.g., General Motors
    - Fed vs. fed
    - SAIL vs. sail
- Often best to lower case everything, since users will use lowercase regardless of correct capitalization

Stop words

- With a stop list, you exclude the commonest words from the dictionary entirely. Intuition:
  - They have little semantic content: the, a, and, to, be
  - They take a lot of space: the top 30 words account for ~30% of postings
- But the trend is away from doing this indiscriminately:
  - Good compression techniques mean the space for including stop words in a system is very small
  - Good query optimization techniques mean you pay little at query time for including stop words
  - You need them for:
    - Phrase queries: "King of Denmark"
    - Various song titles, etc.: "Let it be", "To be or not to be"
    - Relational queries: "flights to London"

Thesauri and soundex

- Handle synonyms and homonyms
  - Hand-constructed equivalence classes
    - e.g., car = automobile
    - color = colour
- Rewrite to form equivalence classes, and index such equivalences
  - When the document contains "automobile", index it under "car" as well (usually, also vice-versa)
- Or expand the query?
  - When the query contains "automobile", look under "car" as well

Soundex

- Traditional class of heuristics to expand a query into phonetic equivalents
  - Language specific; mainly for names
  - E.g., chebyshev -> tchebycheff
- More on this later ...

Lemmatization

- Reduce inflectional/variant forms to their base form, e.g.,
  - am, are, is -> be
  - car, cars, car's, cars' -> car
  - "the boy's cars are different colors" -> "the boy car be different color"
- Lemmatization implies doing proper reduction to the dictionary headword form
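One concrete way to lemmatize in Python is NLTK's WordNet lemmatizer, shown here only as an illustration (it is not what the lecture uses; it needs the wordnet corpus downloaded, and verbs need an explicit part-of-speech hint):

```python
# pip install nltk; then: python -m nltk.downloader wordnet
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("cars"))           # car
print(lemmatizer.lemmatize("are", pos="v"))   # be
print(lemmatizer.lemmatize("is", pos="v"))    # be
```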

Stemming

- Reduce terms to their "roots" before indexing
- "Stemming" suggests crude affix chopping
  - language dependent
  - e.g., automate(s), automatic, automation all reduced to automat.
- Example: "for example compressed and compression are both accepted as equivalent to compress" stems to "for exampl compress and compress ar both accept as equival to compress".

Porter's algorithm

- Commonest algorithm for stemming English
  - Results suggest it is at least as good as other stemming options
- Conventions + 5 phases of reductions
  - phases applied sequentially
  - each phase consists of a set of commands
  - sample convention: of the rules in a compound command, select the one that applies to the longest suffix.

Typical rules in Porter

- sses -> ss
- ies -> i
- ational -> ate
- tional -> tion
- Weight-of-word-sensitive rules, e.g. (m>1) EMENT -> "" (delete EMENT when the stem measure m exceeds 1):
  - replacement -> replac
  - cement -> cement (the stem "c" has m = 0, so the rule does not fire)
- A toy illustration of such rules follows below.
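A toy Python illustration of suffix-rewrite rules of this kind. This is not the full Porter algorithm: the measure-m conditions and the five phases are omitted; only the longest-suffix convention and four sample rules are shown.

```python
# Toy suffix-rewrite pass in the spirit of Porter's rules (not the real thing).
RULES = [("sses", "ss"), ("ies", "i"), ("ational", "ate"), ("tional", "tion")]

def apply_rules(word: str) -> str:
    # Mimic the convention: among matching rules, take the longest suffix.
    for suffix, repl in sorted(RULES, key=lambda r: -len(r[0])):
        if word.endswith(suffix):
            return word[: -len(suffix)] + repl
    return word

print(apply_rules("caresses"))    # caress
print(apply_rules("ponies"))      # poni
print(apply_rules("relational"))  # relate
```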

Other stemmers

- Other stemmers exist, e.g., the Lovins stemmer
  (http://www.comp.lancs.ac.uk/computing/research/stemming/general/lovins.htm)
  - Single-pass, longest suffix removal (about 250 rules)
  - Motivated by linguistics as well as IR
- Full morphological analysis yields at most modest benefits for retrieval
- Do stemming and other normalizations help?
  - Often very mixed results: they really help recall for some queries but harm precision on others

Language-specificity

- Many of the above features embody transformations that are
  - Language-specific and
  - Often, application-specific
- These are "plug-in" addenda to the indexing process
- Both open source and commercial plug-ins are available for handling these

Dictionary entries: first cut

ensemble.french
時間.japanese
MIT.english
mit.german
guaranteed.english
entries.english
sometimes.english
tokenization.english

These may be grouped by language (or not). More on this in ranking/query processing.

Faster postings merges: Skip pointers


Recall basic merge

- Walk through the two postings simultaneously, in time linear in the total number of postings entries:

Brutus -> 2 4 8 16 32 64 128
Caesar -> 1 2 3 5 8 17 21 31

- If the list lengths are m and n, the merge takes O(m+n) operations.
- Can we do better? Yes, if the index isn't changing too fast.

Augment postings with skip pointers (at indexing time)

Brutus: 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128, with skip pointers 2 => 16 and 16 => 128
Caesar: 1 -> 2 -> 3 -> 5 -> 8 -> 17 -> 21 -> 31, with skip pointers 1 => 8 and 8 => 31

- Why? To skip postings that will not figure in the search results.
- How? And where do we place skip pointers?

(See IIR Section 2.3, page 35 (print) / 37 (online).)

Query processing with skip pointers

Brutus: 2 4 8 16 32 64 128 (skips: 2 => 16, 16 => 128)
Caesar: 1 2 3 5 8 17 21 31 (skips: 1 => 8, 8 => 31)

- Suppose we've stepped through the lists until we process 8 on each list.
- When we get to 16 on the top list, we see that its successor is 32.
- But the skip successor of 8 on the lower list is 31, so we can skip ahead past the intervening postings (17 and 21). A sketch follows below.
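A Python sketch of intersection with skip pointers. Skips are stored as an index-to-index map, placed at roughly sqrt(L) spacing per the heuristic on the next slide; the function names are mine, and this is a simplification of the textbook algorithm, not a production implementation.

```python
import math

def add_skips(postings):
    """Skip pointers {from_index: to_index} at ~sqrt(L) spacing."""
    step = round(math.sqrt(len(postings))) or 1
    return {i: i + step for i in range(0, len(postings) - step, step)}

def intersect_with_skips(p1, p2, s1, s2):
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i, j = i + 1, j + 1
        elif p1[i] < p2[j]:
            if i in s1 and p1[s1[i]] <= p2[j]:
                while i in s1 and p1[s1[i]] <= p2[j]:
                    i = s1[i]          # follow skips that don't overshoot
            else:
                i += 1
        else:
            if j in s2 and p2[s2[j]] <= p1[i]:
                while j in s2 and p2[s2[j]] <= p1[i]:
                    j = s2[j]
            else:
                j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 17, 21, 31]
print(intersect_with_skips(brutus, caesar,
                           add_skips(brutus), add_skips(caesar)))  # [2, 8]
```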

Where do we place skips?

- Tradeoff:
  - More skips -> shorter skip spans -> more likely to skip. But lots of comparisons to skip pointers.
  - Fewer skips -> few pointer comparisons, but long skip spans -> few successful skips.

Placing skips

- Simple heuristic: for postings of length L, use sqrt(L) evenly-spaced skip pointers.
- This ignores the distribution of query terms.
- Easy if the index is relatively static; harder if L keeps changing because of updates.
- This definitely used to help; with modern hardware it may not (Bahle et al. 2002):
  - The cost of loading a bigger postings list outweighs the gain from quicker in-memory merging.

Phrase queries


Phrase queries

- Want to answer queries such as "stanford university", as a phrase
- Thus the sentence "I went to university at Stanford" is not a match.
  - The concept of phrase queries is easily understood by users; about 10% of web queries are phrase queries
- It no longer suffices to store only <term : docs> entries

Solution 1: Biword indexes

- Index every consecutive pair of terms in the text as a phrase
- For example the text "Friends, Romans, Countrymen" would generate the biwords
  - friends romans
  - romans countrymen
- Each of these biwords is now a dictionary term
- Two-word phrase query processing is now immediate; see the sketch below.
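A toy sketch of biword generation and two-word phrase lookup (the names `biwords` and `biword_index` are mine, and the docID is arbitrary):

```python
from collections import defaultdict

def biwords(tokens):
    """Every consecutive pair of terms, as a single dictionary term."""
    return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

docs = {7: ["friends", "romans", "countrymen"]}

biword_index = defaultdict(set)
for doc_id, tokens in docs.items():
    for bw in biwords(tokens):
        biword_index[bw].add(doc_id)

# A two-word phrase query is now a single dictionary lookup:
print(biword_index["romans countrymen"])  # {7}
```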

Longer phrase queries

- "stanford university palo alto" can be broken into the Boolean query on biwords:
  "stanford university" AND "university palo" AND "palo alto"
- Without the docs themselves, we cannot verify that the docs matching the above Boolean query actually contain the phrase.
- Can have false positives!

Extended biwords

- Parse the indexed text and perform part-of-speech tagging (POST).
- Bucket the terms into (say) Nouns (N) and articles/prepositions (X).
- Now deem any string of terms of the form NX*N to be an extended biword.
  - Each such extended biword is now made a term in the dictionary.
- Example: "catcher in the rye" has the shape N X X N.
- Query processing: parse the query into Ns and Xs
  - Segment the query into enhanced biwords
  - Look up the index
- A sketch follows below.
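A toy sketch of extended-biword extraction, assuming part-of-speech tags are already available and pre-bucketed into N and X (the bucketing, tag values, and function name are my assumptions for illustration):

```python
def extended_biwords(tagged):
    """Emit N X* N sequences from (term, tag) pairs, with tag in {'N', 'X'}."""
    out, i = [], 0
    while i < len(tagged):
        if tagged[i][1] == "N":
            j = i + 1
            while j < len(tagged) and tagged[j][1] == "X":
                j += 1                      # absorb the X* middle
            if j < len(tagged) and tagged[j][1] == "N":
                out.append(" ".join(t for t, _ in tagged[i:j + 1]))
            i = j if j > i + 1 else i + 1   # next biword starts at closing N
        else:
            i += 1
    return out

# "catcher in the rye" has the shape N X X N:
print(extended_biwords([("catcher", "N"), ("in", "X"),
                        ("the", "X"), ("rye", "N")]))  # ['catcher in the rye']
```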

Issues for biword indexes

- False positives, as noted before
- Index blowup due to a bigger dictionary
- Not a standard solution (for all biwords)
- For an extended biword index, parsing longer queries into conjunctions:
  - E.g., the query "tangerine trees and marmalade skies" is parsed into
    "tangerine trees" AND "trees and marmalade" AND "marmalade skies"

Solution 2: Positional indexes

- Store, for each term, entries of the form:
  <number of docs containing term;
   doc1: position1, position2, ...;
   doc2: position1, position2, ...;
   etc.>

Positional index example

<be: 993427;
  1: 7, 18, 33, 72, 86, 231;
  2: 3, 149;
  4: 17, 191, 291, 430, 434;
  5: 363, 367, ...>

- Which of docs 1, 2, 4, 5 could contain "to be or not to be"?
- Can compress position values/offsets; nevertheless, this expands postings storage substantially

Processing a phrase query

- Extract inverted index entries for each distinct term: to, be, or, not.
- Merge their doc:position lists to enumerate all positions with "to be or not to be".
  - to: 2: 1,17,74,222,551; 4: 8,16,190,429,433; 7: 13,23,191; ...
  - be: 1: 17,19; 4: 17,191,291,430,434; 5: 14,19,101; ...
- Same general method for proximity searches (a sketch follows below)
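A simplified Python sketch of positional intersection for a two-term phrase, using the `to`/`be` postings above. It checks that the second term occurs exactly one position after the first; real proximity processing generalizes the offset to "within k" (the function name is mine).

```python
def phrase_intersect(p1, p2):
    """Docs where term2 occurs at position(term1) + 1.

    p1, p2: dict mapping docID -> sorted list of positions.
    """
    answer = []
    for doc in sorted(p1.keys() & p2.keys()):
        pos2 = set(p2[doc])
        if any(pos + 1 in pos2 for pos in p1[doc]):
            answer.append(doc)
    return answer

to = {2: [1, 17, 74, 222, 551], 4: [8, 16, 190, 429, 433], 7: [13, 23, 191]}
be = {1: [17, 19], 4: [17, 191, 291, 430, 434], 5: [14, 19, 101]}
print(phrase_intersect(to, be))  # [4]  ("to be" at 16/17, 190/191, 429/430)
```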

Proximity queries

- LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
  - Here, /k means "within k words of".
- Clearly, positional indexes can be used for such queries; biword indexes cannot.

Positional index size

- Can compress position values/offsets; nevertheless, a positional index expands postings storage substantially
- It is nonetheless the standard choice, because of the power and usefulness of phrase and proximity queries, whether used explicitly or implicitly in a ranking retrieval system.

Positional index size

- Need an entry for each occurrence, not just once per document
- Index size depends on average document size (why?)
  - Average web page has <1000 terms
  - SEC filings, books, even some epic poems easily reach 100,000 terms
- Consider a term with frequency 0.1%:

Document size   Postings   Positional postings
1,000              1            1
100,000            1          100
Rules of thumb

- A positional index is 2-4 times as large as a non-positional index
- Positional index size is 35-50% of the volume of the original text
- Caveat: all of this holds for English-like languages

Combination schemes:
- Positional and biword approaches can be profitably combined
- For particular phrases ("Michael Jackson", "Britney Spears") it is inefficient to keep on merging positional postings lists
  - Even more so for phrases like "The Who"
