[Zope-dev] zpatterns-0.4 ?

Phillip J. Eby pje@telecommunity.com
Tue, 06 Jun 2000 15:09:40 -0500


At 08:21 PM 6/6/00 +0400, Jephte CLAIN wrote:
>Hello,
>
>I have used ZPatterns-0.3.0 to complete a project that I shipped
>yesterday.

Congratulations, you're even braver than I am.  ;)


>Although I had to write a CatalogAwareRack/Specialist to allow me to
>search for rackmountables, I can say that ZPatterns has saved my life! I
>would like to thank Phillip for this great contribution. (too bad I no
>longer remember the URL of that site where you can send so-called
>"virtual beers" to fellows on the internet :-))

That's okay, I don't drink.  :)


>Now, for a project I have to ship by the end of June, I would like to
>use ZPatterns-0.4.0. I especially want to use the new search interface.
>Does Phillip know when ZPatterns-0.4.0 will be out? If it is out by the
>middle of June, I can certainly wait. Otherwise, I wonder whether the
>search plugin (if it has already been written) can be used with
>ZPatterns-0.3; in that case, I can do the work of adapting the code to
>ZPatterns-0.3.

I'm shooting for release by the end of next week.  Search plugins have not
been finished yet, but in any case they will NOT work with prior versions;
there is no place to plug them in, nor will any of the old code call them. :)


>Also, for that project, I will have to write an SQL attribute provider.
>If someone can give me advice on how Ty wants to implement SQL Attribute
>Providers after he's done with the LDAP Attribute Provider, perhaps I
>can start the work (since I have a use for it) and let Ty continue it
>when he has more time / has finished the LDAP Attribute Provider. I
>already have some ideas on how to do it, but I lack a view of how far
>the concept has to be pushed. Without any help, I will end up writing an
>SQL Attribute Provider that works only for me...

Ty and I just finished the first "real" attribute provider, an "ExprGetter"
provider, which is designed to work with SQL, LDAP, and other "method"
objects available in Zope.  It takes a "source expression" and a list of
"target expressions".  The source expression is evaluated whenever any of
the attributes provided by the provider are requested.  Then the target
expressions are evaluated in order, each one having the variable RESULT
equal to the result of the source expression.  An example:

Source expression:
  (SomeSQLMethod(searchkey=self.id) or [NOT_FOUND])[0]

Target expressions:
  myAttr1=RESULT.field1
  myAttr2="(%s)" % RESULT.field2
  myAttr3=RESULT.field3 * self.someOtherAttr

The above example will call SomeSQLMethod() (searched for in the Rack or
Customizer's acquisition tree) whenever myAttr1, myAttr2, or myAttr3 are
requested for the DataSkin/RackMountable.  It will be passed the DataSkin's
"id" attribute (note the self.attr notation for referencing data from the
skin in the source expression).  If the SQLMethod returns no rows, the
special NOT_FOUND value will be substituted, which tells the attribute
provider to pretend none of the attributes exist.  If it returns rows, the
first row ([0]) will be returned as RESULT for use in the target
expressions as shown.  The results of all three expressions will be cached
in the DataSkin's attribute cache for the remainder of the current
transaction, thus preventing repeated calls to the database.
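
To make the order of operations concrete, here is a rough Python
illustration.  This is NOT the actual provider code; it just mirrors the
behavior described above, with a plain eval() standing in for the VSEval
machinery, and acquisition of things like SomeSQLMethod glossed over:

  class _NotFound: pass
  NOT_FOUND = _NotFound()   # stand-in for the provider's special marker

  def compute_attrs(source_expr, target_exprs, skin, cache):
      # Evaluate the source expression once, with 'self' and NOT_FOUND
      # visible to it.
      ns = {'self': skin, 'NOT_FOUND': NOT_FOUND}
      result = eval(source_expr, ns)
      if result is NOT_FOUND:
          return   # pretend none of the provided attributes exist
      # Evaluate each target expression in order, with RESULT bound to
      # the source expression's value, caching what it yields for the
      # rest of the transaction.
      ns['RESULT'] = result
      for name, expr in target_exprs:
          value = eval(expr, ns)
          if value is not NOT_FOUND:   # NOT_FOUND hides that attribute
              cache[name] = value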

"NOT_FOUND" and "self" are also available for use in the target
expressions, so one can conditionally simulate the existence or
non-existence of a given attribute, as well as do "computed attributes".
All expressions are VSEval-based (DocumentTemplate expr's) and "safely"
editable through the web.  Also, "self" is in its acquisition context
(either its folder or its Specialist) and you can freely reference other
provider-driven attributes or propertysheets (from the same or different
providers) in your expressions so long as you don't create an infinite
recursion loop.
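
For instance (hypothetical attribute names, purely to illustrate), a
target expression along the lines of:

  discount=(self.isMember and RESULT.member_discount) or NOT_FOUND

would make "discount" appear to exist only for members (with the usual
and/or caveat if the value itself can legitimately be false).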

This particular expression-based attribute provider is the first of a set
of defined mappings from attributes and events into expression calls.  We
have in fact designed a "little language" (tentatively called SkinScript)
which will let you render statements like the above as:

USE (SomeSQLMethod(searchkey=self.id) or [NOT_FOUND])[0]
TO COMPUTE
  myAttr1=RESULT.field1,
  myAttr2="(%s)" % RESULT.field2,
  myAttr3=RESULT.field3 * self.someOtherAttr

And other things like:

NOTIFY some_expression AFTER ADDING, CHANGING, DELETING

USE expr TO SET attr1,attr2,attr3 

etc.
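
So the SQL write-back case, for example, might end up reading something
like this (hypothetical method and attribute names, and the syntax is
still subject to change):

  USE UpdateCustomerSQL(key=self.id, name=self.name, email=self.email)
  TO SET name, email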

But it's not clear whether an actual SkinScript compiler will make it into
0.4.0.  The plan is to first make a provider class for each of the
SkinScript constructs, to debug them.  We can then replace them later with
a single "SkinScript provider".

We came up with SkinScript because we realized that configuring a gazillion
individual Attribute and SheetProviders was terribly tedious and messy, not
to mention that it was hard to see what was going on with your object in
one place.  Also, it eliminates the need for having specific kinds of
Providers for different databases.  (Note: we are not eliminating the
provider mechanism; we just plan to add a SkinScript attribute/sheet
provider to the things you can plug in to DataManagers.)

Anyway...  so yes, 0.4.0 *is* suffering a bit from feature creep, why do
you ask?  :)  Hooks will be there for indexing and other triggers, but here
is a list of some things that will definitely NOT be in 0.4.0:

* Index/trigger agents will not be guaranteed to work correctly without the
absolute latest Zope 2.2 stuff, due to transaction processing issues.  The
hooks will be there to play with, however.

* There is not yet any interface defined for Racks, Specialists, and other
DataManagers to actually ask indexes to *search* for anything, so if by
some miracle you get a trigger set up to properly catalog things, you'll
have to manually put methods in your Specialist to search the catalog for
you (a rough sketch of such a method is at the end of this message).

* Attribute setters will be on the ugly side, especially for SQL, as you
will have to have SQL methods that can update one field at a time.  (This
is less of an issue for LDAP because the ZLDAP stuff already has provisions
to batch updates to the LDAP server.)  By 0.5.0, we hope to have an
extended form of SQLMethods (or some other way to do this) that will cache
the updated fields until a subtransaction (or full transaction) commit
takes place, then throw them to the server all at once.
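
For what it's worth, the kind of manual Specialist search method I mean
above might look roughly like the following External Method (a sketch
only: it assumes a ZCatalog acquirable as 'Catalog' with 'id' as a
metadata column, and uses getItem() to get back at the actual objects):

  def searchItems(self, REQUEST):
      # Query the acquired catalog, then map each catalog brain back to
      # its RackMountable, skipping anything that has since disappeared.
      found = []
      for brain in self.Catalog.searchResults(REQUEST):
          item = self.getItem(brain.id)
          if item is not None:
              found.append(item)
      return found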