[Zope-CVS] CVS: Products/ZCTextIndex - BaseIndex.py:1.24 CosineIndex.py:1.22 IIndex.py:1.9 OkapiIndex.py:1.28

Tim Peters tim.one@comcast.net
Tue, 28 May 2002 19:42:20 -0400


Update of /cvs-repository/Products/ZCTextIndex
In directory cvs.zope.org:/tmp/cvs-serv24154

Modified Files:
	BaseIndex.py CosineIndex.py IIndex.py OkapiIndex.py 
Log Message:
OkapiIndex.query_weight():  return an upper bound on possible doc scores.

CosineIndex.query_weight():  rewrote to squash code duplication.  No
change in what it returns (it's always returned an upper bound on
possible doc scores, although people probably haven't thought of it
that way before).

Elsewhere:  consequent changes.

Problems:

+ mhindex.py needs repair, but I can't run it.  Note that its current
  use of query_weight isn't legitimate (the usage doesn't conform to
  the IIndex interface -- passing a string is passing "a sequence",
  but not the intended sequence <wink>).
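
  The misuse is easy to see in plain Python: a string does satisfy
  "a sequence", but iterating it yields single characters rather than
  the query terms the IIndex interface intends (term names below are
  illustrative):

```python
# A string is "a sequence" -- of characters, not of terms:
bad_terms = "common exotic"
assert list(bad_terms)[:3] == ['c', 'o', 'm']

# The intended argument is a sequence of term strings:
good_terms = ["common", "exotic"]
assert list(good_terms) == ["common", "exotic"]
```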

+ ZCTextIndex doesn't pass query_weight() on.

+ We've defined no methods to help clients compute what needs to be
  passed to query_weight (a sequence of only the positive terms).

  I changed mailtest.py to cheat, but it's doing a wrong thing for
  negative terms.

+ I expect it will be impossible to shake people from the belief that
  100.0 * score / query_weight is some kind of "relevance score".  It
  isn't.  So perhaps better not to expose this in ZCTextIndex.


=== Products/ZCTextIndex/BaseIndex.py 1.23 => 1.24 ===
 
     # Subclass must override.
-    # It's not clear what it should do; so far, it only makes real sense
-    # for the cosine indexer.
+    # It's not clear what it should do.  It must return an upper bound on
+    # document scores for the query.  It would be nice if a document score
+    # divided by the query's query_weight gave the probability that a
+    # document was relevant, but nobody knows how to do that.  For
+    # CosineIndex, the ratio is the cosine of the angle between the document
+    # and query vectors.  For OkapiIndex, the ratio is a (probably
+    # unachievable) upper bound with no "intuitive meaning" beyond that.
     def query_weight(self, terms):
         raise NotImplementedError
 


=== Products/ZCTextIndex/CosineIndex.py 1.21 => 1.22 ===
         N = float(len(self._docweight))
         sum = 0.0
-        for wid in wids:
-            if wid == 0:
-                continue
-            map = self._wordinfo.get(wid)
-            if map is None:
-                continue
-            wt = math.log(1.0 + N / len(map))
+        for wid in self._remove_oov_wids(wids):
+            wt = inverse_doc_frequency(len(self._wordinfo[wid]), N)
             sum += wt ** 2.0
         return scaled_int(math.sqrt(sum))
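
The tightened loop leans on inverse_doc_frequency and scaled_int.  A
minimal sketch of the arithmetic, assuming the helper matches the
inline formula the old code used (log(1 + N/doccount)); the scale
factor and the document counts here are illustrative, not the index's:

```python
import math

def inverse_doc_frequency(term_count, num_items):
    # Assumed to match the inline formula the old code used.
    return math.log(1.0 + num_items / term_count)

def scaled_int(f, scale=1024.0):
    # ZCTextIndex stores weights as scaled ints; the scale is assumed here.
    return int(f * scale + 0.5)

# Query weight = Euclidean length of the query's IDF vector, an upper
# bound on any document's cosine score for the query.
N = 1000.0                      # total number of documents (example)
doc_counts = [10, 250]          # docs containing each in-vocabulary wid
sum_sq = sum(inverse_doc_frequency(c, N) ** 2.0 for c in doc_counts)
query_weight = scaled_int(math.sqrt(sum_sq))
```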
 


=== Products/ZCTextIndex/IIndex.py 1.8 => 1.9 ===
         although not terms with a not.  If a term appears more than
         once in a query, it should appear more than once in terms.
+
+        Nothing is defined about what "weight" means, beyond that the
+        result is an upper bound on document scores returned for the
+        query.
         """
 
     def index_doc(docid, text):


=== Products/ZCTextIndex/OkapiIndex.py 1.27 => 1.28 ===
 
     def query_weight(self, terms):
-        # This method was inherited from the cosine measure, and doesn't
-        # make sense for Okapi measures in the way the cosine measure uses
-        # it.  See the long comment at the end of the file for how full
-        # Okapi BM25 deals with weighting query terms.
-        return 10   # arbitrary
+        # Get the wids.
+        wids = []
+        for term in terms:
+            termwids = self._lexicon.termToWordIds(term)
+            wids.extend(termwids)
+        # The max score for term t is the maximum value of
+        #     TF(D, t) * IDF(Q, t)
+        # We can compute IDF directly, and as noted in the comments below
+        # TF(D, t) is bounded above by 1+K1.
+        N = float(len(self._docweight))
+        tfmax = 1.0 + self.K1
+        sum = 0
+        for t in self._remove_oov_wids(wids):
+            idf = inverse_doc_frequency(len(self._wordinfo[t]), N)
+            sum += scaled_int(idf * tfmax)
+        return sum
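
The claim that TF(D, t) is bounded above by 1+K1 follows from the
saturating shape of the Okapi term-frequency factor.  A sketch,
assuming the standard BM25 form of TF (the constants here are typical
choices, not necessarily the index's): the factor increases with raw
frequency f but never reaches 1+K1.

```python
K1, B = 1.2, 0.75   # typical Okapi constants (assumed values)

def tf(f, doclen=100.0, avg_doclen=100.0):
    # Standard BM25 term-frequency saturation.
    return f * (K1 + 1.0) / (f + K1 * (1.0 - B + B * doclen / avg_doclen))

bound = 1.0 + K1
# TF climbs toward, but stays strictly below, 1 + K1.
for f in (1, 10, 100, 10000):
    assert tf(f) < bound
```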
 
     def _get_frequencies(self, wids):
         d = {}